~netlandish/links#100: Add periodic task to process pending BaseURL instances

When a bookmark is saved, it's possible that fetching the link metadata fails. Instead of retrying right away, we should create a process that automatically retries, periodically and in bulk jobs.

My thinking is we have a job that runs every hour and processes, say, 1500 pending BaseURLs at a time. We will add 3 new fields to the base_url table (sketched below, after the list):

  • error_count - The number of times we've had an error fetching the URL
  • last_error_date - The last time a fetch attempt for the URL failed
  • metadata - A JSONB array storing error messages and their dates
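
As a rough sketch of how the new columns might look on the model, assuming a Go codebase with struct tags mapping to the base_url table (the type and field names here are illustrative, not the project's actual model):

    package models

    import (
        "database/sql"
        "time"
    )

    // ErrorEntry is one element of the JSONB metadata array: a single
    // failed fetch attempt, with its message and timestamp.
    type ErrorEntry struct {
        Message string    `json:"message"`
        Date    time.Time `json:"date"`
    }

    // BaseURL shows only the three new columns; existing fields are elided.
    type BaseURL struct {
        // ... existing fields (ID, URL, PublicReady, etc.) ...
        ErrorCount    int          `db:"error_count"`     // failed fetch attempts so far
        LastErrorDate sql.NullTime `db:"last_error_date"` // when the last attempt failed
        Metadata      []ErrorEntry `db:"metadata"`        // JSONB error log (would need a custom Scanner/Valuer)
    }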

We set a maximum error count, say 3. The periodic task will look for entries that have public_ready=false, an error count greater than 0 and less than 3, and a last error that happened 24+ hours ago. It will then process each entry again and, if it fails, update the error date and append to the error log.
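
A minimal sketch of that hourly task, assuming Go and PostgreSQL; fetchMetadata stands in for the existing metadata-fetching code, and the table/column names follow the fields above:

    package tasks

    import (
        "context"
        "database/sql"
        "log"
    )

    const (
        batchSize = 1500 // pending BaseURLs processed per run
        maxErrors = 3    // give up on a link after this many failures
    )

    // fetchMetadata is a stand-in for the existing fetch logic.
    func fetchMetadata(ctx context.Context, url string) error {
        // ... fetch and store the link metadata ...
        return nil
    }

    // ProcessPendingBaseURLs is meant to run hourly from the task scheduler.
    func ProcessPendingBaseURLs(ctx context.Context, db *sql.DB) error {
        rows, err := db.QueryContext(ctx, `
            SELECT id, url
            FROM base_url
            WHERE public_ready = false
              AND error_count > 0
              AND error_count < $1
              AND last_error_date <= now() - interval '24 hours'
            LIMIT $2`, maxErrors, batchSize)
        if err != nil {
            return err
        }
        defer rows.Close()

        // Read the whole batch first so the connection is free for the
        // UPDATEs below.
        type pending struct {
            id  int64
            url string
        }
        var batch []pending
        for rows.Next() {
            var p pending
            if err := rows.Scan(&p.id, &p.url); err != nil {
                return err
            }
            batch = append(batch, p)
        }
        if err := rows.Err(); err != nil {
            return err
        }

        for _, p := range batch {
            if fetchErr := fetchMetadata(ctx, p.url); fetchErr != nil {
                // Failure: bump the count, stamp the date, and append the
                // message to the JSONB error log.
                _, err := db.ExecContext(ctx, `
                    UPDATE base_url
                    SET error_count = error_count + 1,
                        last_error_date = now(),
                        metadata = coalesce(metadata, '[]'::jsonb) ||
                            jsonb_build_object('message', $2::text, 'date', now())
                    WHERE id = $1`, p.id, fetchErr.Error())
                if err != nil {
                    log.Printf("base_url %d: recording error: %v", p.id, err)
                }
                continue
            }
            // Success: the entry is ready for public listing.
            if _, err := db.ExecContext(ctx, `
                UPDATE base_url SET public_ready = true WHERE id = $1`, p.id); err != nil {
                log.Printf("base_url %d: marking ready: %v", p.id, err)
            }
        }
        return nil
    }

Note that entries reaching the maximum simply stop matching the error_count < 3 filter, which gives the "never process again" behavior below for free.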

After 3 failed attempts we never process that link again, as it's probably a bad link or simply not worth the continued effort.

Status: RESOLVED IMPLEMENTED
Submitter: ~petersanchez
Assigned to: No-one
Submitted: 2 months ago
Updated: a month ago
Labels: task