1. Focus on stable international/regional sources
   - Improved TED EU scraper (5 search strategies, 5 pages each); a pagination sketch follows the debug script at the end of this note
   - All stable sources now scraped hourly (TED EU, Sell2Wales, PCS Scotland, eTendersNI)
   - De-prioritized unreliable UK gov sites (100% removal rate)

2. Archival feature
   - New DB columns: archived, archived_at, archived_snapshot, last_validated, validation_failures
   - Cleanup script now preserves full tender snapshots before archiving
   - Gradual failure handling (3 retries before archiving); see the sketch below
   - No data loss - the historical record is preserved

3. Email alerts
   - Daily digest (8am) - all new tenders from the last 24h
   - High-value alerts (every 4h) - tenders over £100k
   - Professional HTML emails with full tender details
   - Configurable via environment variables; see the scheduling sketch below

Expected outcomes:
- 50-100 stable tenders (vs 26 currently)
- Zero 404 errors (archived data is preserved)
- Proactive notifications - no missed opportunities
- Historical archive for trend analysis

Files:
- scrapers/ted-eu.js (improved)
- cleanup-with-archival.mjs (new)
- send-tender-alerts.mjs (new)
- migrations/add-archival-fields.sql (new)
- THREE_IMPROVEMENTS_SUMMARY.md (documentation)

All cron jobs updated: hourly scraping, daily cleanup, and scheduled alerts.
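To make item 2 concrete, here is a minimal sketch of the archival flow, assuming a Postgres-backed `tenders` table with the new columns listed above. The `pg` pool setup, the `markTenderMissing` helper name, and the JSONB type for `archived_snapshot` are illustrative assumptions, not the actual cleanup-with-archival.mjs code.

```javascript
// Hypothetical sketch of the archival flow; table and column names come from
// the summary above, everything else (pool setup, helper name) is illustrative.
import pg from 'pg';

const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
const MAX_VALIDATION_FAILURES = 3; // failed validations tolerated before archiving

export async function markTenderMissing(tenderId) {
  // Record another failed validation attempt.
  const { rows } = await pool.query(
    `UPDATE tenders
        SET validation_failures = validation_failures + 1,
            last_validated = NOW()
      WHERE id = $1
      RETURNING *`,
    [tenderId]
  );

  const tender = rows[0];
  if (!tender || tender.validation_failures < MAX_VALIDATION_FAILURES) return;

  // After the third consecutive failure, preserve a full snapshot instead of
  // deleting the row (assumes archived_snapshot is a JSONB column).
  await pool.query(
    `UPDATE tenders
        SET archived = TRUE,
            archived_at = NOW(),
            archived_snapshot = $2::jsonb
      WHERE id = $1`,
    [tenderId, JSON.stringify(tender)]
  );
}
```

The key design point is that a tender is only archived after three consecutive failed validations, and its full row is copied into archived_snapshot first, so no historical data is lost.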
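The alert cadence in item 3 (daily digest at 8am, high-value check every 4 hours, £100k threshold) could be scheduled roughly as below. This is a sketch only: the use of node-cron, the HIGH_VALUE_THRESHOLD variable name, and the sendDailyDigest / sendHighValueAlerts helpers are assumptions, not the actual send-tender-alerts.mjs implementation (the summary only says the cron jobs were updated, which may mean system cron entries).

```javascript
// Illustrative scheduling only; the real deployment may use system cron
// entries rather than node-cron, and the helper names are placeholders.
import cron from 'node-cron';

const HIGH_VALUE_THRESHOLD = Number(process.env.HIGH_VALUE_THRESHOLD ?? 100_000); // £100k default

async function sendDailyDigest({ since }) {
  // Placeholder: query tenders created after `since` and email an HTML digest.
  console.log('Daily digest covering tenders since', since.toISOString());
}

async function sendHighValueAlerts({ minValue }) {
  // Placeholder: email alerts for tenders valued above `minValue`.
  console.log('High-value alert check, threshold £', minValue);
}

// Daily digest at 08:00 - all new tenders from the last 24 hours.
cron.schedule('0 8 * * *', () =>
  sendDailyDigest({ since: new Date(Date.now() - 24 * 60 * 60 * 1000) })
);

// Every 4 hours - tenders above the high-value threshold.
cron.schedule('0 */4 * * *', () =>
  sendHighValueAlerts({ minValue: HIGH_VALUE_THRESHOLD })
);
```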
Debug script used to inspect the TED search results page (JavaScript, 33 lines, 1.1 KiB):
import { chromium } from 'playwright';

// Launch a headless browser and open the TED search results page for the UK.
const browser = await chromium.launch({ headless: true });
const page = await browser.newPage();

const url = 'https://ted.europa.eu/en/search/result?q=&page=1&placeOfPerformanceCountry=GBR';
console.log('Loading:', url);

await page.goto(url, { waitUntil: 'networkidle', timeout: 30000 });
await page.waitForTimeout(3000); // give client-side rendering time to finish

// Save screenshot
await page.screenshot({ path: '/tmp/ted-screenshot.png', fullPage: false });
console.log('Screenshot saved to /tmp/ted-screenshot.png');

// Get page HTML sample
const bodyHTML = await page.evaluate(() => {
  return document.body.innerHTML.substring(0, 5000);
});

console.log('\nFirst 5000 chars of body HTML:');
console.log(bodyHTML);

// Try to find any links to notices
const noticeLinks = await page.evaluate(() => {
  const links = Array.from(document.querySelectorAll('a[href*="/notice/"], a[href*="Notice"]'));
  return links.slice(0, 5).map(a => ({ href: a.href, text: a.textContent.trim().substring(0, 100) }));
});

console.log('\nFound notice links:', JSON.stringify(noticeLinks, null, 2));

await browser.close();
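Item 1 says the improved scraper runs 5 search strategies across 5 pages each. As a rough sketch of how that could build on the notice-link selector this debug script surfaces: the strategy list, query parameters, and variable names below are assumptions, not the actual scrapers/ted-eu.js code.

```javascript
// Hypothetical sketch: loop search strategies × result pages on TED,
// collecting notice links with the selector the debug script above uses.
import { chromium } from 'playwright';

const SEARCH_STRATEGIES = [
  { label: 'UK place of performance', params: 'placeOfPerformanceCountry=GBR' },
  // ...the remaining four strategies (e.g. keyword or CPV filters) would go here.
];
const PAGES_PER_STRATEGY = 5;

const browser = await chromium.launch({ headless: true });
const page = await browser.newPage();
const notices = [];

for (const strategy of SEARCH_STRATEGIES) {
  for (let pageNo = 1; pageNo <= PAGES_PER_STRATEGY; pageNo++) {
    const url = `https://ted.europa.eu/en/search/result?q=&page=${pageNo}&${strategy.params}`;
    await page.goto(url, { waitUntil: 'networkidle', timeout: 30000 });
    const links = await page.evaluate(() =>
      Array.from(document.querySelectorAll('a[href*="/notice/"]'))
        .map(a => ({ href: a.href, title: a.textContent.trim() }))
    );
    notices.push(...links.map(l => ({ ...l, strategy: strategy.label })));
  }
}

await browser.close();
console.log(`Collected ${notices.length} candidate notice links`);
```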