When fetching a saved bookmark, it's possible that fetching the link metadata fails. Instead of retrying right away, we should create a process that automatically retries periodically, in bulk jobs.
My thinking is we have a job that runs every hour and processes, say, 1500 pending BaseURLs at a time. We will add 3 new fields to the base_url table:
error_count
- the number of times we've had an error fetching the url

last_error_date
- the last time we attempted to fetch the url

metadata
- a JSONB array to store error messages and dates

We set a maximum error count of, say, 3. The periodic task will look for entries that have public_ready=false, an error count greater than 0 and less than 3, and a last error attempt 24+ hours ago. It will then process them again, updating the error date and the error log if the fetch fails again.
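A minimal sketch of what the schema change and the retry-selection query could look like, assuming PostgreSQL and the field names above; the column types, the batch size, and the `links` package name are illustrative and may differ from the actual commit.

```go
// Package links: hypothetical home for the bookmark retry logic.
package links

// addRetryColumns adds the three retry-tracking fields to base_url.
// Types and defaults are assumptions, not taken from the commit.
const addRetryColumns = `
ALTER TABLE base_url
    ADD COLUMN error_count     integer     NOT NULL DEFAULT 0,
    ADD COLUMN last_error_date timestamptz,
    ADD COLUMN metadata        jsonb       NOT NULL DEFAULT '[]'::jsonb;
`

// selectRetryBatch picks BaseURLs that failed at least once, have not yet
// hit the retry limit of 3, and whose last attempt was 24+ hours ago.
const selectRetryBatch = `
SELECT id, url
FROM base_url
WHERE public_ready = false
  AND error_count > 0
  AND error_count < 3
  AND last_error_date < now() - interval '24 hours'
ORDER BY last_error_date
LIMIT 1500;
`
```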
After 3 failed attempts we just never process that link again, as it's probably a bad link or simply not worth the continued effort.
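For illustration, here's a hedged sketch of what the hourly job could look like in Go, building on the query above. fetchMetadata is a hypothetical stand-in for the existing metadata fetcher, and the real job in the linked commit may be structured differently.

```go
package links

import (
	"context"
	"database/sql"
	"time"
)

// fetchMetadata is a stand-in for whatever actually fetches and stores the
// link metadata; it returns an error when the fetch fails.
func fetchMetadata(ctx context.Context, url string) error {
	// ... fetch metadata; on success the normal fetch path is assumed to
	// mark the URL public_ready, so nothing extra is needed here.
	return nil
}

// RetryFailedURLs processes one batch of previously failed BaseURLs. On
// another failure it bumps error_count, records the attempt time, and
// appends the error to the metadata JSONB array. After 3 failures the row
// stops matching selectRetryBatch, so no extra "give up" logic is needed.
func RetryFailedURLs(ctx context.Context, db *sql.DB) error {
	rows, err := db.QueryContext(ctx, selectRetryBatch)
	if err != nil {
		return err
	}
	defer rows.Close()

	type entry struct {
		id  int64
		url string
	}
	var batch []entry
	for rows.Next() {
		var e entry
		if err := rows.Scan(&e.id, &e.url); err != nil {
			return err
		}
		batch = append(batch, e)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	for _, e := range batch {
		if ferr := fetchMetadata(ctx, e.url); ferr != nil {
			_, err := db.ExecContext(ctx, `
				UPDATE base_url
				SET error_count = error_count + 1,
				    last_error_date = $1,
				    metadata = metadata || jsonb_build_array(
				        jsonb_build_object('error', $2::text, 'date', $1::timestamptz))
				WHERE id = $3`,
				time.Now().UTC(), ferr.Error(), e.id)
			if err != nil {
				return err
			}
		}
	}
	return nil
}
```

Running this hourly (e.g. from whatever task runner or cron setup the project already uses) keeps each run bounded at 1500 rows.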
This was completed here:
https://git.code.netlandish.com/~netlandish/links/commit/02c85f04f6867069e8f9f8296841f4a4a017b39d