
Web Crawler Developer
About the Role
As a Web Crawler Developer, you will develop scripts to extract data from multiple websites and maintain the data pipelines that carry it. You will build a deep understanding of our vast web data sources and know exactly how, when, and which data to scrape, parse, and store. You will leverage existing frameworks and processes for scraping and ingesting web content, and build frameworks that automate and sustain a constant flow of data from multiple sources, keeping the crawlers and frameworks themselves in good working order.
About Stealth Startup
We are a small, disruptive GenAI startup building agentic AI capabilities. At our core, we are dedicated to crafting science-based solutions that prioritize social impact.
What You'll Do
- Develop scripts to extract data from multiple websites
- Maintain data pipelines and ensure a steady flow of data
- Leverage existing frameworks and processes for scraping and ingesting web content
- Develop frameworks for automating and maintaining data flow
- Maintain web crawlers and frameworks
What We're Looking For
- 2+ years of experience in building crawler/web-scraping applications
- Knowledge of web scraping libraries and frameworks (Scrapy, Selenium, Beautiful Soup)
- Experience analyzing HTML and CSS code to identify and extract data
- Coding experience in Python, including XPath, working with APIs, and multithreading
- Algorithmic skills, such as developing algorithms to detect and strip content surrounding the main textual content of a page
- Experience working with databases, including hands-on Python database integrations
- Experience with data parsing, data mining, and data analytics
- Data visualization experience is preferred
Ready to apply for this role?
Web Crawler Developer at Stealth Startup — click below to submit your application.
