Indeed.com Data Scraping, Indeed.com Web Scraping, Indeed.com Website Scraping, Extract Indeed.com site, Indeed.com Database, Website Data Scraping, Email Scraping services, Indeed.com Scraping, Indeed.com Extraction, Indeed.com Data Extraction

Friday, 30 December 2016

How Data Mining is Useful to Companies?

Every business, organization and government body collects large amounts of data for research and development. Such a huge database lets them have information on hand when required. The problem is that finding the important information within that data takes a lot of time. "If you want to grow rapidly, you must take quick and accurate decisions to grab timely available opportunities."

By applying the process of data mining, you can easily extract and filter the required information from data. It is a process of refining data and extracting important information, and it is mainly divided into three stages: pre-processing, mining and validation. In pre-processing, a large amount of relevant data is collected. The mining stage includes data classification, clustering, error correction and linking of information. The last, but equally important, stage is validation, without which you cannot trust the information. In short, data mining is a process of converting data into authentic information.

Let's have a look at how data mining is useful to companies.

Fast and Feasible Decisions: Searching for information in a huge bundle of data takes time and quickly frustrates the person doing it, and with a frustrated mind one cannot take accurate decisions. With the help of data mining, one can easily get the information and make fast decisions. It also helps to compare information against various factors, so the decisions become more reliable. Data mining helps make every decision quick and feasible.

Powerful Strategies: After data mining, information becomes precise and easy to understand. While making strategies, one can easily analyze the information along various dimensions. This analysis gives a realistic idea of how a strategy will play out when implemented. Management can then implement powerful strategies effectively to expand business boundaries.

Competitive Advantage: Because the information is readily available and precise, one can compare it with competitors' information. Such comparison is essential; without it, the business will suffer. After doing a competitive analysis, one can take corrective decisions to get ahead of competitors. This is how a company gains competitive advantage.

Your business can get all the benefits of data mining at reduced rates through outsourcing.

Source : http://ezinearticles.com/?How-Data-Mining-is-Useful-to-Companies?&id=2835042

Saturday, 24 December 2016

Data Scrapping

People who are involved in business activities might have come across the term data scraping. It is a process in which data or information is extracted from Portable Document Format (PDF) files. Data scraping tools are easy to use and can automatically arrange data found in different formats on the internet. These advanced tools collect useful information according to the user's needs: the user simply enters keywords or phrases, and the tool extracts all the related information available in the PDF file. It is widely used to take information out of non-editable formats.

The main advantage of PDF files is that they protect the originality of the document when you convert data from Word to PDF. When a file is heavy because of graphics or images in the content, its size is reduced by compression algorithms. A PDF is independent of any particular software or hardware for installation, and it allows encryption of files, which enhances the security of your contents.

Although PDF files have many advantages, they also present challenges. For example, if you find data on the internet but the author has encrypted the file to prevent printing, you can still get at the content through the scraping process. Such tools are readily available on the internet, and users can choose one according to their needs. Using these programs you can extract the data that you need.
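
As a rough illustration, here is a minimal Python sketch of keyword-based extraction from a text-based PDF. It assumes the pypdf library is installed and that the file actually has a text layer; the file name and keyword are placeholders.

# Keyword search across the pages of a PDF, assuming a text layer exists.
from pypdf import PdfReader

def pages_containing(pdf_path, keyword):
    reader = PdfReader(pdf_path)
    hits = []
    for number, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""        # empty string if nothing is extractable
        if keyword.lower() in text.lower():
            hits.append(number)
    return hits

print(pages_containing("report.pdf", "revenue"))   # hypothetical file and keyword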

Source : http://ezinearticles.com/?Data-Scrapping&id=4951020

Wednesday, 14 December 2016

Data Extraction - A Guideline to Use Scrapping Tools Effectively

Many people around the world do not know much about these scraping tools. In their view, mining means extracting resources from the earth; in the internet age, the newly mined resource is data. There are many data mining software tools available on the internet to extract specific data from the web. Every company in the world deals with tons of data, and managing and converting that data into a useful form is hectic work. If the right information is not available at the right time, a company loses the chance to make strategic decisions based on accurate information.

This kind of situation means missed opportunities in today's competitive market. Data extraction and data mining tools help you take strategic decisions at the right time to reach your goals in this competitive business. These tools bring many advantages: you can store customer information in a sequential manner, you can track the operations of your competitors, and you can measure your company's performance. It is critical for every company to have this information at its fingertips when it is needed.

To survive in this competitive business world, data extraction and data mining are critical to a company's operations. A powerful tool called a website scraper is used in online digital mining. With this tool, you can filter data on the internet and retrieve the information you need for specific purposes. Scraping tools are used in many fields and come in numerous types; research, surveillance, and the harvesting of direct marketing leads are just a few ways the website scraper assists professionals in the workplace.

A screen scraping tool is another tool useful for extracting data from the web. It is particularly helpful when you work on the internet and want to mine data onto your local hard disk. It provides a graphical interface that lets you designate the Uniform Resource Locator (URL), the data elements to be extracted, and the scripting logic to traverse pages and work with the mined data, and you can run it at periodic intervals. Using this tool, you can download online tables and databases into your spreadsheets. Another important scraping tool is data mining software, which extracts large amounts of information from the web and converts that data into a useful format. It is used in various sectors of business, especially for generating leads, setting budgets, watching competitors' prices and analysing online trends. With this tool, the information is gathered and immediately put to use for your business needs.
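
As a small illustration of the "download an online table into your spreadsheet" idea mentioned above, here is a minimal Python sketch that pulls an HTML table into a CSV file. It assumes the pandas library (with an HTML parser such as lxml) is installed; the URL is a placeholder, and a scheduler such as cron would handle the periodic runs.

# Save the first HTML table on a page to a spreadsheet-friendly CSV file.
import pandas as pd

url = "https://example.com/price-list"   # hypothetical page containing a <table>
table = pd.read_html(url)[0]             # parse the first table on the page
table.to_csv("price_list.csv", index=False)
print("Saved", len(table), "rows to price_list.csv")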

Another useful scraping tool is the email scraping tool, which crawls public email addresses from various websites. You can easily form a large mailing list with it and use that list to promote your product online, send offers to related businesses, and much more. With this tool, you can find customers who are likely to be interested in your product, or potential business partners, which allows you to expand your business in the online market.

Many well-established and esteemed organizations provide these features free of cost as a trial offer to customers. If you want a permanent service, you need to pay a nominal fee. You can also download these tools from their websites.

Source:http://ezinearticles.com/?Data-Extraction---A-Guideline-to-Use-Scrapping-Tools-Effectively&id=3600918

Friday, 9 December 2016

Scraping in PDF Files - Improving Accessibility

Data scraping is a procedure in which information contained on the Net in HTML, PDF and various other documents is sorted out mechanically. It is also about collecting relevant data and saving it in spreadsheets or databases for retrieval purposes. On a majority of sites, text content can be easily accessed in the source code; however, a good number of businesses make use of the Portable Document Format. This format was launched by Adobe, and documents in this format can be viewed on almost any operating system. Some people convert documents from Word to PDF when they need to send files over the Net, and many convert PDF to Word so that they can edit their documents. The biggest benefit of the format is that documents look like a replica of the original: they appear organized and identical on almost all operating systems, with no disturbance in viewing them. The downside is that in many such files the text is effectively stored as a picture or image, so copying and pasting is no longer possible.

Scraping in this format is a procedure in which data available in such files is scraped. A range of tools is needed to carry out scraping on documents created in this format. You'll find two main forms of PDF files: one is built from a text file and the other is built from an image. Adobe itself offers software that can capably scrape text-based files; for files that are image-based, a special application is needed for the task.

An OCR program is the primary tool for this job. Optical Character Recognition software scans a document for small pictures that can be segregated into letters; the pictures are compared with actual letters, and where they match well, the letters are copied into a file. These programs scrape image-based files reasonably well, but they cannot be called perfect. Once the procedure is done, you can search through the data to find the areas and parts you were looking for. More often than not, it is difficult to find a utility that obtains exactly the data that is needed without proper customization, but with thorough checking you can usually find a tool that fits your needs.
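
For image-based files, a minimal OCR sketch in Python might look like the following. It assumes the pdf2image and pytesseract packages are installed along with the underlying Tesseract engine and Poppler; the file name is a placeholder, and real scans usually need cleanup of the recognized text.

# Render each page of a scanned PDF as an image and run OCR over it.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("scanned.pdf", dpi=300)   # hypothetical scanned file
for number, image in enumerate(pages, start=1):
    text = pytesseract.image_to_string(image)       # recognized characters for this page
    print("--- page", number, "---")
    print(text)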

Source: http://ezinearticles.com/?Scraping-in-PDF-Files---Improving-Accessibility&id=6108439

Monday, 5 December 2016

Data Discovery vs. Data Extraction

Looking at screen-scraping at a simplified level, there are two primary stages involved: data discovery and data extraction. Data discovery deals with navigating a web site to arrive at the pages containing the data you want, and data extraction deals with actually pulling that data off of those pages. Generally when people think of screen-scraping they focus on the data extraction portion of the process, but my experience has been that data discovery is often the more difficult of the two.

The data discovery step in screen-scraping might be as simple as requesting a single URL. For example, you might just need to go to the home page of a site and extract out the latest news headlines. On the other side of the spectrum, data discovery may involve logging in to a web site, traversing a series of pages in order to get needed cookies, submitting a POST request on a search form, traversing through search results pages, and finally following all of the "details" links within the search results pages to get to the data you're actually after. In cases of the former a simple Perl script would often work just fine. For anything much more complex than that, though, a commercial screen-scraping tool can be an incredible time-saver. Especially for sites that require logging in, writing code to handle screen-scraping can be a nightmare when it comes to dealing with cookies and such.
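
To make those data discovery steps concrete, here is a minimal Python sketch of the log-in-then-search flow described above, using the requests library. The URLs and form field names are hypothetical; a real site may require extra headers or tokens.

# Data discovery: log in, keep the cookies, then submit a search form.
import requests

with requests.Session() as session:                      # the Session object carries cookies
    session.post("https://example.com/login",
                 data={"username": "me", "password": "secret"})
    results = session.post("https://example.com/search",
                           data={"q": "widgets", "page": 1})
    print(results.status_code, len(results.text))        # the search results page HTML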

In the data extraction phase you've already arrived at the page containing the data you're interested in, and you now need to pull it out of the HTML. Traditionally this has typically involved creating a series of regular expressions that match the pieces of the page you want (e.g., URLs and link titles). Regular expressions can be a bit complex to deal with, so most screen-scraping applications will hide these details from you, even though they may use regular expressions behind the scenes.
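
As a toy example of the regular-expression approach, here is a minimal Python sketch that pulls link URLs and titles out of a snippet of HTML. Real pages are far messier, which is exactly why most tools hide this layer from you.

# Extract link URLs and titles from HTML with a regular expression.
import re

html = '<a href="/news/1">First headline</a> <a href="/news/2">Second headline</a>'
for url, title in re.findall(r'<a href="([^"]+)">([^<]+)</a>', html):
    print(url, title)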

As an addendum, I should probably mention a third phase that is often ignored, and that is, what do you do with the data once you've extracted it? Common examples include writing the data to a CSV or XML file, or saving it to a database. In the case of a live web site you might even scrape the information and display it in the user's web browser in real-time. When shopping around for a screen-scraping tool you should make sure that it gives you the flexibility you need to work with the data once it's been extracted.
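
For that third phase, a minimal Python sketch of writing extracted records to a CSV file might look like this; the rows are placeholders standing in for whatever your extraction step produced.

# Write extracted records to a CSV file.
import csv

rows = [{"url": "/news/1", "title": "First headline"},
        {"url": "/news/2", "title": "Second headline"}]   # placeholder extracted data

with open("headlines.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "title"])
    writer.writeheader()
    writer.writerows(rows)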

Source: http://ezinearticles.com/?Data-Discovery-vs.-Data-Extraction&id=165396

Wednesday, 30 November 2016

An Easy Way For Data Extraction

There are many data scraping tools available on the internet, and with them you can download large amounts of data without any stress. Over the past decade, the internet revolution has turned the entire world into an information center: you can obtain almost any type of information from the internet. However, if you want particular information for one task, you may need to search many websites, and if you want to capture all the information from those websites, you have to copy it and paste it into your documents, which is hectic work for anyone. With scraping tools, you save time and money and reduce manual work.

A web data extraction tool extracts data from the HTML pages of different websites and compares the data. Every day, many new websites are hosted on the internet, and it is not possible to visit them all in a single day. With a data mining tool, you can cover far more of the web, and if you work with a wide range of applications, these scraping tools are very useful to you.
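
As a rough sketch of comparing the same field across sites, the following Python snippet (assuming the requests and lxml libraries are installed; the URLs are placeholders) pulls the page title from two sites so the values can be put side by side.

# Fetch two pages and extract the same field from each for comparison.
import requests
from lxml import html

urls = ["https://example.com", "https://example.org"]    # hypothetical sites

for url in urls:
    doc = html.fromstring(requests.get(url).text)
    titles = doc.xpath("//title/text()")                  # the same element on each site
    print(url, "->", titles[0].strip() if titles else "no title found")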

Data extraction software is used to compare structured data on the internet. The many search engines on the internet will help you find websites on a particular topic, but the data on different sites appears in different styles. A scraping tool helps you compare data from different sites and structure it for your records.

A web crawler tool is used to index web pages on the internet; it moves data from the internet to your hard disk, so you can browse that content much faster once it is downloaded. This is especially useful if you want to fetch data during off-peak hours: downloads that would otherwise take a long time can be completed at a faster rate. Another tool for business people is the email extractor. With this tool, you can easily collect the email addresses of targeted customers and send advertisements for your product to them at any time. It is a good tool for building a database of customers.

There are several more scraping tools available on the internet, and some esteemed websites provide information about them. You can download these tools by paying a nominal amount.

Source: http://ezinearticles.com/?An-Easy-Way-For-Data-Extraction&id=3517104

Saturday, 26 November 2016

How to scrape search results from search engines like Google, Bing and Yahoo

Search giants like Google, Yahoo and Bing built their empires on scraping others' content. However, they don't want you to scrape them. How ironic, isn't it?

Search engine performance is a very important metric all digital marketers want to measure and improve. I'm sure you use some great SEO tools to check how your keywords perform. All great SEO tools come with a search keyword ranking feature that tells you how your keywords are performing in Google, Yahoo, Bing, etc.

How will you get data from search engines if you want to build a keyword ranking app?

These search engines have APIs, but the daily query limits are very low and not useful for commercial purposes. The only solution is to scrape search results. The search engine giants obviously know this :). Once they detect that you are scraping, they will block your IP. Period!

How do search engines detect bots?

Here are the common methods used to detect bots.

* IP address: Search engines can detect when too many requests come from a single IP. If a high volume of traffic is detected, they will throw a captcha.

* Search patterns: Search engines match traffic against an existing set of patterns, and if there is a large variation, they classify it as a bot.

If you don't have access to sophisticated technology, it is nearly impossible to scrape search engines like Google, Bing or Yahoo.

How to avoid detection

There are some things you can do to avoid detection; a short sketch after the list below puts them together.

    Scrape slowly and don’t try to squeeze everything at once.
    Switch user agents between queries
    Scrape randomly and don’t follow the same pattern
    Use intelligent IP rotations
    Clear Cookies after each IP change or disable them completely
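
A minimal Python sketch that combines these precautions might look like the following. It assumes the requests library is installed; the search URL and user-agent strings are placeholders, and the search engines' terms of service still apply.

# Slow, randomized requests with rotating user agents and no shared cookies.
import random
import time
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

queries = ["web scraping", "data extraction", "keyword ranking"]
random.shuffle(queries)                                   # avoid a predictable pattern

for query in queries:
    headers = {"User-Agent": random.choice(USER_AGENTS)}  # switch user agents between queries
    response = requests.get("https://www.example-search.com/search",   # placeholder URL
                            params={"q": query}, headers=headers)      # no Session: cookies are not reused
    print(query, response.status_code)
    time.sleep(random.uniform(10, 30))                    # scrape slowly, at random intervals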

Thanks for reading this blog post.

Source: http://blog.datahut.co/how-to-scrape-search-results-from-search-engines-like-google-bing-and-yahoo/

Wednesday, 9 November 2016

Outsource Data Mining Services to Offshore Data Entry Company

Companies in India offer complete solutions for all types of data mining services.


The data mining and web research services offered help businesses get critical information for their analysis and marketing campaigns. As this process requires professionals with good knowledge of internet research or online research, customers can take advantage of outsourcing their data mining, data extraction and data collection work to utilize these resources at a very competitive price.

In times of recession every company is very careful about cost, so companies are trying to find ways to cut costs, and outsourcing is a good option for doing so. It is relevant to businesses of every size, from small firms to large organizations. Data entry is the most common kind of outsourced work, and to meet demands for high-quality, precise data entry most corporate firms prefer to outsource data entry services to offshore countries like India.

In India there are a number of companies which offer high-quality data entry work at very low rates. Outsourcing data mining work is a crucial requirement for rapidly growing companies that want to focus on their core areas and keep their costs under control.

Why outsource your data entry requirements?

Easy and fast communication: Flexible communication methods are provided, and the provider will be ready to talk with you at a time convenient to you; depending on the demands of the work, a dedicated resource or a whole team will be assigned to drive the project.

Quality with a high level of accuracy: Experienced companies handling a variety of data-entry projects develop dedicated quality processes for maintaining the best quality of work.

Turnaround time: The capability to deliver fast turnaround as per project requirements and meet your project deadline; dedicated staff can work 24/7 with a high level of accuracy.

Affordable rates: Services are provided at rates that are affordable for the industry. To minimize cost, every aspect of the system is customized so the work is handled efficiently.

Outsourcing service providers are business process outsourcing companies specializing in data mining and data entry services. Their teams of highly skilled and efficient people focus solely on data processing, data mining and data entry outsourcing, catering to data entry projects of a varied nature and type.

Why outsource data mining services?

360 degree Data Processing Operations
Free Pilots Before You Hire
Years of Data Entry and Processing Experience
Domain Expertise in Multiple Industries
Best Outsourcing Prices in Industry
Highly Scalable Business Infrastructure
24X7 Round The Clock Services

Experienced management and teams have delivered millions of processed records to customers from the USA, Canada, the UK, other European countries and Australia.

Outsourcing companies specialize in data entry operations and guarantee the highest quality and on-time delivery at the most competitive prices.

Herat Patel, CEO at 3Alpha Dataentry Services, has more than 15 years of experience in providing data-related services outsourced to India.

Visit our Facebook Data Entry profile for comments & reviews.

Our services help to convert any kind of hard-copy source, and our data mining services help to collect business contacts, customer contacts, product specifications and more from different web sources. We promise to deliver the best quality work and help you excel in your business by letting you focus on your core business activities. Outsource data mining services to India, take advantage of outsourcing, and save cost.

Source: http://ezinearticles.com/?Outsource-Data-Mining-Services-to-Offshore-Data-Entry-Company&id=4027029

Monday, 24 October 2016

Scraping Yelp Data and How to use?

We get a lot of requests to scrape data from Yelp. These requests come in on a daily basis, sometimes several times a day. At the same time, we have not seen a good business case for a commercial Yelp scraping project.

We have decided to release a simple example Yelp robot which anyone can run in Chrome on their own computer, tune to their own requirements and use to collect some data. With this robot you can save business contact information such as addresses, postal codes, telephone numbers and website addresses. The robot is placed in our Demo space on the Web Robots portal for anyone to use: just sign up, find the robot and use it.

How to use it:

    Sign in to our portal here.
    Download our scraping extension from here.
    Find robot named Yelp_us_demo in the dropdown.
    Modify start URL to the first page of your search results. For example: http://www.yelp.com/search?find_desc=Restaurants&find_loc=Arlington,+VA,+USA
    Click Run.
    Let the robot finish its job, then download the data from the portal.

Some things to consider:

This robot is placed in our Demo space, so it is accessible to anyone: anyone can modify and run it, and anyone can download the collected data. The robot's code may be edited by someone else, but you can always restore it from the sample code below. Yelp limits the number of search results, so do not expect to scrape more results than you would normally see by searching.

In case you want to create your own version of this robot, here is its full code:

// starting URL above must be the first page of search results.
// Example: http://www.yelp.com/search?find_desc=Restaurants&find_loc=Arlington,+VA,+USA

steps.start = function () {
  var rows = [];

  // Collect one record per business listing on the current results page.
  $(".biz-listing-large").each(function (i, v) {
    if ($("h3 a", v).length > 0) {
      var row = {};
      row.company = $(".biz-name", v).text().trim();
      row.reviews = $(".review-count", v).text().trim();
      row.companyLink = $(".biz-name", v)[0].href;
      row.location = $(".secondary-attributes address", v).text().trim();
      row.phone = $(".biz-phone", v).text().trim();
      rows.push(row);
    }
  });

  // Save the collected rows, then follow the "next" link if there is one.
  emit("yelp", rows);
  if ($(".next").length === 1) {
    next($(".next")[0].href, "start");
  }
  done();
};

Source: https://webrobots.io/scraping-yelp-data/

Thursday, 13 October 2016

How to use Web Content Extractor(WCE) as Email Scraper?

Web Content Extractor is a great web scraping software package developed by the Newprosoft team. The software has an easy-to-use project wizard to create a scraping configuration and scrape data from websites.

One day I came across Visual Email Extractor, which is also a product of Newprosoft and similar to Web Content Extractor, but its primary use is to scrape email addresses by crawling the websites you feed to it. I noticed that, with a little modification to a Web Content Extractor project configuration, you can use WCE just like Visual Email Extractor to extract email addresses.

In this post I will show you which configuration makes Web Content Extractor extract email addresses. I still recommend Visual Email Extractor, as it has a lot more features than just the email extraction you get from WCE.

Here is the configuration that makes WCE extract emails.

Step 1: Open Web Content Extractor, create a new project and click Next.

Step 2: Under Crawling Rules -> Advanced Rules tab, apply the following settings.

Crawling Level 1 Settings

Set 'Follow links if link text equals' to:
*contact*; *feedback*; *support*; *about*

Set 'Follow links if URL contains' to:
contact; feedback; support; about

Set 'Do not follow links if URL contains' to:
google.; yahoo.; bing; msn.; altavista.; myspace.com; youtube.com; googleusercontent.com; =http; .jpg; .gif; .png; .bmp; .exe; .zip; .pdf;

Set 'Maximum Crawling Depth' to 2.

Set 'Crawling Order' to Depth First Crawling.

Tick the 'Follow all internal links' check box.

Crawling Level 2 Settings

Set 'Follow links if link text equals' to:
*contact*; *feedback*; *support*; *about*

Set 'Follow links if URL contains' to:
contact; feedback; support; about

Set 'Do not follow links if URL contains' to:
=http

Step 3: After applying the above settings, click Next -> in the Extraction Pattern window, click Define -> in Web Page Address (URL), enter any URL where an email address appears, then click the + sign to the right of Data Fields to define the scraping pattern.

Inside the HTML Structure pane, select the HTML check box or the Body check box, which means that for each page the whole page content will be parsed for data.

The last setting extracts emails from the page using a regular-expression-based email extraction function. Open the Predefined Script window, select 'Extract_Email_Addresses' and click OK. If the page you chose contains an email address, you will see the harvested email in the Script Result.
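
WCE's predefined script itself is not reproduced here, but the underlying idea, regular-expression-based email extraction over the fetched page content, can be sketched in a few lines of Python. The URL is a placeholder and the pattern is deliberately simple; it will miss obfuscated addresses.

# Regex-based email extraction from a fetched page (illustration only).
import re
import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

page = requests.get("https://example.com/contact").text   # hypothetical contact page
emails = sorted(set(EMAIL_RE.findall(page)))
print(emails)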

Hopefully this helps you use Web Content Extractor as an email scraper. Share your views in the comments.

Source: http://webdata-scraping.com/use-web-content-extractor-as-email-scraper/

Wednesday, 21 September 2016

Run Code Template – New Feature Added to Fminer Web Scraping Tool

Fminer is a powerful piece of web scraping software; I already gave a brief overview of all the Fminer features in a previous post. In this post I am going to introduce one of its interesting features, Run Code Template, which was recently added to Fminer. This feature is similar to the Fminer "Run Code" action but differs in how you use it: the Run Code action is used inside the data scraping flow, and its Python code gets executed while the scraper is running.

Run Code Templates, by contrast, are saved Python code snippets that you can run on the data tables after scraping completes. For example, if you get white space in your scraped data, you can easily trim the leading and trailing spaces by executing the "strip_column" template; see the code of that template below.

'''Strip all data of a column in data table
Remove the blank of data in the head and the tail.
'''

tabName = '[%table1|data table%]'
colName = '[%table1.column1|table column for strip%]'

tab = tables[tabName]
for i, row in enumerate(tab):
    # remove leading and trailing whitespace, then write the row back
    row[colName] = row[colName].strip()
    tab.edit_row(i, row)

This template comes with Fminer, along with a few other templates like "merge_tables_with_same_columns". Below are the steps for executing template Python code on scraped data.

Step 1: Click on the second icon from the right, the one labelled "Run Code", under the Data section.

Step 2: A popup will appear; click on the "Templates" icon, choose the template you want to execute and then click OK.

Step 3: A configuration window will appear asking you to choose the table, and the column within that table, on which you want to execute the code. Click OK again.

Step 4: Now you can see the code of the template. Click the execute icon and the script will start running; depending on the number of records, it will take some time to finish.

In many web scraping projects I have found this template code very handy for cleaning data and making life easier. Templates are stored at the following path, so you can create your own templates with customized code.

C:\Program Files (x86)\FMiner\templates

I have created one template which I use to remove HTML code that comes along when scraping badly organized HTML pages. Below is the code of the template for stripping HTML:

'''Strip HTML will remove all html tags of a column in data table.
'''
import re

tabName = '[%table1|data table%]'
colName = '[%table1.column1|table column for substring%]'
colNew = '[%table1.column1|table column to add new data%]'

# compile the tag-matching pattern once; '<.*?>' matches any HTML tag (non-greedy)
tag_re = re.compile('<.*?>')

tab = tables[tabName]
for i, row in enumerate(tab):
    # drop the tags, keep the text, and store it in the target column
    row[colNew] = tag_re.sub('', row[colName])
    tab.edit_row(i, row)

Stay connected, as I am going to post more code templates that will make your web scraping life easier and help you manipulate data on the fly.

Source: http://webdata-scraping.com/run-code-template-new-feature-added-fminer-web-scraping-tool/

Monday, 12 September 2016

Benefits of Ruby over Python & R for Web Scraping

In this data driven world, you need to be constantly vigilant, as information and key data for an organization keeps changing all the while. If you get the right data at the right time in an efficient manner, you can stay ahead of competition. Hence, web scraping is an essential way of getting the right data. This data is crucial for many organizations, and scraping technique will help them keep an eye on the data and get the information that will benefit them further.

Web scraping involves both crawling the web for data and extracting the data from the page. There are several languages which programmers prefer for web scraping, the top ones are Ruby, Python & R. Each language has its own pros and cons over the other, but if you want the best results and a smooth flow, Ruby is what you should be looking for.

Ruby is very good for production deployments, and Ruby, Redis & Chef have proven to be a great combination. String manipulation in Ruby is easy because it is based on Perl syntax. Ruby is also great for analyzing web pages using one of its very powerful gems, Nokogiri. Nokogiri is much easier to use than the comparable packages and libraries used by R and Python, and it can deal with broken HTML / HTML fragments easily. Ruby also has many extensions, such as Sanitize and Loofah, that can help clean up broken HTML.

Python programmers widely use a library called Beautiful Soup for pulling data out of HTML & XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree, and it commonly saves programmers hours or days of work. R programmers have a newer package called rvest that makes it easy to scrape data from HTML web pages and is inspired by libraries like Beautiful Soup. It is designed to work with magrittr so that you can express complex operations as elegant pipelines composed of simple, easily understood pieces.
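
As a tiny illustration of the Beautiful Soup style of parsing (assuming the beautifulsoup4 package is installed; the markup is a toy example):

# Parse a fragment of HTML and select elements by class with Beautiful Soup.
from bs4 import BeautifulSoup

html = "<ul><li class='item'>Ruby</li><li class='item'>Python</li><li class='item'>R</li></ul>"
soup = BeautifulSoup(html, "html.parser")
print([li.get_text() for li in soup.select("li.item")])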

Ruby is far ahead of Python & R for cloud development and deployments. The Ruby Bundler system is just great for managing and deploying packages from Github. Using Chef, you can start up and tear down nodes on EC2 at will, monitor for failures, scale up or down, reset your IP addresses, etc. Ruby also has great testing frameworks like Fakeweb and Capybara, making it almost trivial to build a great suite of unit tests and to include advanced features, like crawling and scraping using webkit / selenium.

The only disadvantage of Ruby is the lack of machine learning and NLP toolkits, which makes it much harder to emulate the capabilities of a tool like Pattern. It can still be done, however, since most of the heavy lifting can be done asynchronously using Unix tools like liblinear or vowpal wabbit.

Conclusion

Each language has its plus points, and you can pick the one you are most comfortable with. But if you are looking for a smooth web scraping experience, then Ruby is the best option. That has been our choice for years at PromptCloud for the best web scraping results. If you have any further questions about this, feel free to get in touch with us.

Source: https://www.promptcloud.com/blog/benefits-of-ruby-for-web-scraping

Thursday, 1 September 2016

Why is a Web scraping service better than Scraping tools

Web scraping has been making ripples across various industries in the last few years. Newer businesses can employ web scraping to gain quick market insights and equip themselves to take on their competitors. This works like clockwork if you know how to do the analysis right. Before we jump into that, there is the technical aspect of web scraping. Should your company use a scraping tool to get the required data from the web? Although this sounds like an easy solution, there is more to it than what meets the eye. We explain why it’s better to go with a dedicated web scraping service to cover your data acquisition needs rather than going by the scraping tool route.

Cost is lowered

Although this might come as a surprise, the cost of getting data from employing a data scraping tool along with an IT personnel who can get it done would exceed the cost of a good subscription based web scraping service. Not every company has the necessary resources needed to run web scraping in-house. By depending on a Data service provider, you will save the cost of software, resources and labour required to run web crawling in the firm. Besides, you will also end up having more time and less worries. More of your time and effort can therefore go into the analysis part which is crucial to you as a business owner.

Accessibility is high with a service

Multifaceted websites make it difficult for scraping tools to extract data. A good web scraping service, on the other hand, can easily deal with bottlenecks in the scraping process when they arise. Websites to be scraped often undergo changes in their structure, which calls for modifying the crawler accordingly. Unlike a scraping tool, a dedicated service will be able to extract data from complex sites that use Ajax, Javascript and the like. By going with a subscription-based service, you are doing yourself the favour of not being involved in this constant headache.

Accuracy in results

A DIY scraping tool might be able to get you data, but the accuracy and relevance of the acquired data will vary. You might be able to get it right with a particular website, but that might not be the case with another. This gives uncertainty to the results of your data acquisition and could even be disastrous for your business. On the other hand, a good scraping service will give you highly refined data which is in a ready to consume form.

Outcomes are instant with a service

Considering the high resource requirements of the web scraping process, your scraping tool is likely to be much slower than a reputed service that has got the right infrastructure and resources to scrape data from the web efficiently. It might not be feasible for your firm to acquire and manage the same setup since that could affect the focus of your business.

Tidying up of Data is an exhausting process

Web scrapers collect data into a dump file which would be huge in size. You will have to do a lot of tidying up in this to get data in a usable format. With the scraping tools route, you would be looking for more tools to clean up the data collected. This is a waste of time and effort that you could use in much better aspects of your business. Whereas with a web scraping service, you won’t have to worry about cleaning up of the data as it comes with the service. You get the data in a plug and use format which gives you more time to do better things.

Many sites have policies for data scraping

Sometimes, websites that you want to scrape data from might have policies discouraging the act. You wouldn’t want to act against their policies being ignorant of their existence and get into legal trouble. With a web scraping service, you don’t have to worry about these. A well-established data scraping provider will definitely follow the rules and policies set by the website. This would mean you can be relieved of such worries and go ahead with finding trends and ideas from the data that they provide.

More time to analyse the data

This is by far the best advantage of going with a scraping service rather than a tool. Since everything related to data acquisition is handled by the scraping service provider, you will have more time for analysing the data and deriving useful business decisions from it. As the business owner, analysing the data with care should be your highest priority. Since using a scraping tool to acquire data will cost you more time and effort, the analysis part is bound to suffer, which defeats your whole purpose.

Bottom line

It is up to you to choose between a web scraping tool and a dedicated scraping service. As the business owner, it is much better for you to stay away from the technical aspects of web scraping and focus on deriving a better business strategy from the data. When you have made up your mind to go with a data scraping service, it is important to choose the right web scraping service for maximum benefits.

Source: https://www.promptcloud.com/blog/web-scraping-services-better-than-scraping-tools

Wednesday, 24 August 2016

Three Common Methods For Web Data Extraction

Probably the most common technique used traditionally to extract data from web pages is to cook up some regular expressions that match the pieces you want (e.g., URLs and link titles). Our screen-scraper software actually started out as an application written in Perl for this very reason. In addition to regular expressions, you might also use some code written in something like Java or Active Server Pages to parse out larger chunks of text. Using raw regular expressions to pull out the data can be a little intimidating to the uninitiated, and can get a bit messy when a script contains a lot of them. At the same time, if you're already familiar with regular expressions, and your scraping project is relatively small, they can be a great solution.

Other techniques for getting the data out can get very sophisticated as algorithms that make use of artificial intelligence and such are applied to the page. Some programs will actually analyze the semantic content of an HTML page, then intelligently pull out the pieces that are of interest. Still other approaches deal with developing "ontologies", or hierarchical vocabularies intended to represent the content domain.

There are a number of companies (including our own) that offer commercial applications specifically intended to do screen-scraping. The applications vary quite a bit, but for medium to large-sized projects they're often a good solution. Each one will have its own learning curve, so you should plan on taking time to learn the ins and outs of a new application. Especially if you plan on doing a fair amount of screen-scraping it's probably a good idea to at least shop around for a screen-scraping application, as it will likely save you time and money in the long run.

So what's the best approach to data extraction? It really depends on what your needs are, and what resources you have at your disposal. Here are some of the pros and cons of the various approaches, as well as suggestions on when you might use each one:

Raw regular expressions and code

Advantages:

- If you're already familiar with regular expressions and at least one programming language, this can be a quick solution.

- Regular expressions allow for a fair amount of "fuzziness" in the matching such that minor changes to the content won't break them.

- You likely don't need to learn any new languages or tools (again, assuming you're already familiar with regular expressions and a programming language).

- Regular expressions are supported in almost all modern programming languages. Heck, even VBScript has a regular expression engine. It's also nice because the various regular expression implementations don't vary too significantly in their syntax.

Disadvantages:

- They can be complex for those that don't have a lot of experience with them. Learning regular expressions isn't like going from Perl to Java. It's more like going from Perl to XSLT, where you have to wrap your mind around a completely different way of viewing the problem.

- They're often confusing to analyze. Take a look through some of the regular expressions people have created to match something as simple as an email address and you'll see what I mean.

- If the content you're trying to match changes (e.g., they change the web page by adding a new "font" tag) you'll likely need to update your regular expressions to account for the change.

- The data discovery portion of the process (traversing various web pages to get to the page containing the data you want) will still need to be handled, and can get fairly complex if you need to deal with cookies and such.

When to use this approach: You'll most likely use straight regular expressions in screen-scraping when you have a small job you want to get done quickly. Especially if you already know regular expressions, there's no sense in getting into other tools if all you need to do is pull some news headlines off of a site.

Ontologies and artificial intelligence

Advantages:

- You create it once and it can more or less extract the data from any page within the content domain you're targeting.

- The data model is generally built in. For example, if you're extracting data about cars from web sites the extraction engine already knows what the make, model, and price are, so it can easily map them to existing data structures (e.g., insert the data into the correct locations in your database).

- There is relatively little long-term maintenance required. As web sites change you likely will need to do very little to your extraction engine in order to account for the changes.

Disadvantages:

- It's relatively complex to create and work with such an engine. The level of expertise required to even understand an extraction engine that uses artificial intelligence and ontologies is much higher than what is required to deal with regular expressions.

- These types of engines are expensive to build. There are commercial offerings that will give you the basis for doing this type of data extraction, but you still need to configure them to work with the specific content domain you're targeting.

- You still have to deal with the data discovery portion of the process, which may not fit as well with this approach (meaning you may have to create an entirely separate engine to handle data discovery). Data discovery is the process of crawling web sites such that you arrive at the pages where you want to extract data.

When to use this approach: Typically you'll only get into ontologies and artificial intelligence when you're planning on extracting information from a very large number of sources. It also makes sense to do this when the data you're trying to extract is in a very unstructured format (e.g., newspaper classified ads). In cases where the data is very structured (meaning there are clear labels identifying the various data fields), it may make more sense to go with regular expressions or a screen-scraping application.

Screen-scraping software

Advantages:

- Abstracts most of the complicated stuff away. You can do some pretty sophisticated things in most screen-scraping applications without knowing anything about regular expressions, HTTP, or cookies.

- Dramatically reduces the amount of time required to set up a site to be scraped. Once you learn a particular screen-scraping application the amount of time it requires to scrape sites vs. other methods is significantly lowered.

- Support from a commercial company. If you run into trouble while using a commercial screen-scraping application, chances are there are support forums and help lines where you can get assistance.

Disadvantages:

- The learning curve. Each screen-scraping application has its own way of going about things. This may imply learning a new scripting language in addition to familiarizing yourself with how the core application works.

- A potential cost. Most ready-to-go screen-scraping applications are commercial, so you'll likely be paying in dollars as well as time for this solution.

- A proprietary approach. Any time you use a proprietary application to solve a computing problem (and proprietary is obviously a matter of degree) you're locking yourself into using that approach. This may or may not be a big deal, but you should at least consider how well the application you're using will integrate with other software applications you currently have. For example, once the screen-scraping application has extracted the data how easy is it for you to get to that data from your own code?

When to use this approach: Screen-scraping applications vary widely in their ease-of-use, price, and suitability to tackle a broad range of scenarios. Chances are, though, that if you don't mind paying a bit, you can save yourself a significant amount of time by using one. If you're doing a quick scrape of a single page you can use just about any language with regular expressions. If you want to extract data from hundreds of web sites that are all formatted differently you're probably better off investing in a complex system that uses ontologies and/or artificial intelligence. For just about everything else, though, you may want to consider investing in an application specifically designed for screen-scraping.

As an aside, I thought I should also mention a recent project we've been involved with that has actually required a hybrid approach of two of the aforementioned methods. We're currently working on a project that deals with extracting newspaper classified ads. The data in classifieds is about as unstructured as you can get. For example, in a real estate ad the term "number of bedrooms" can be written about 25 different ways. The data extraction portion of the process is one that lends itself well to an ontologies-based approach, which is what we've done. However, we still had to handle the data discovery portion. We decided to use screen-scraper for that, and it's handling it just great. The basic process is that screen-scraper traverses the various pages of the site, pulling out raw chunks of data that constitute the classified ads. These ads then get passed to code we've written that uses ontologies in order to extract out the individual pieces we're after. Once the data has been extracted we then insert it into a database.

Source: http://ezinearticles.com/?Three-Common-Methods-For-Web-Data-Extraction&id=165416

Friday, 12 August 2016

Getting Data from the Web

You’ve tried everything else, and you haven’t managed to get your hands on the data you want. You’ve found the data on the web, but, alas — no download options are available and copy-paste has failed you. Fear not, there may still be a way to get the data out. For example you can:

Get data from web-based APIs, such as interfaces provided by online databases and many modern web applications (including Twitter, Facebook and many others). This is a fantastic way to access government or commercial data, as well as data from social media sites.

Extract data from PDFs. This is very difficult, as PDF is a language for printers and does not retain much information on the structure of the data that is displayed within a document. Extracting information from PDFs is beyond the scope of this book, but there are some tools and tutorials that may help you do it.

Screen scrape web sites. During screen scraping, you’re extracting structured content from a normal web page with the help of a scraping utility or by writing a small piece of code. While this method is very powerful and can be used in many places, it requires a bit of understanding about how the web works.

With all those great technical options, don't forget the simple options: often it is worth spending some time searching for a file with machine-readable data, or calling the institution which holds the data you want.

In this chapter we walk through a very basic example of scraping data from an HTML web page.

What is machine-readable data?

The goal for most of these methods is to get access to machine-readable data. Machine-readable data is created for processing by a computer, instead of presentation to a human user. The structure of such data relates to the contained information, not to the way it is eventually displayed. Examples of easily machine-readable formats include CSV, XML, JSON and Excel files, while formats like Word documents, HTML pages and PDF files are more concerned with the visual layout of the information. PDF, for example, is a language which talks directly to your printer; it's concerned with the position of lines and dots on a page, rather than with distinguishable characters.

Scraping web sites: what for?

Everyone has done this: you go to a web site, see an interesting table and try to copy it over to Excel so you can add some numbers up or store it for later. Yet this often does not really work, or the information you want is spread across a large number of web sites. Copying by hand can quickly become very tedious, so it makes sense to use a bit of code to do it.

The advantage of scraping is that you can do it with virtually any web site — from weather forecasts to government spending, even if that site does not have an API for raw data access.

What you can and cannot scrape

There are, of course, limits to what can be scraped. Some factors that make it harder to scrape a site include:

Badly formatted HTML code with little or no structural information e.g. older government websites.

Authentication systems that are supposed to prevent automatic access e.g. CAPTCHA codes and paywalls.

Session-based systems that use browser cookies to keep track of what the user has been doing.

A lack of complete item listings and possibilities for wildcard search.

Blocking of bulk access by the server administrators.

Another set of limitations is legal barriers: some countries recognize database rights, which may limit your right to re-use information that has been published online. Sometimes, you can choose to ignore the license and do it anyway — depending on your jurisdiction, you may have special rights as a journalist. Scraping freely available Government data should be fine, but you may wish to double check before you publish. Commercial organizations — and certain NGOs — react with less tolerance and may try to claim that you’re “sabotaging” their systems. Other information may infringe the privacy of individuals and thereby violate data privacy laws or professional ethics.

Tools that help you scrape

There are many programs that can be used to extract bulk information from a web site, including browser extensions and some web services. Depending on your browser, tools like Readability (which helps extract text from a page) or DownThemAll (which allows you to download many files at once) will help you automate some tedious tasks, while Chrome’s Scraper extension was explicitly built to extract tables from web sites. Developer extensions like FireBug (for Firefox, the same thing is already included in Chrome, Safari and IE) let you track exactly how a web site is structured and what communications happen between your browser and the server.

ScraperWiki is a web site that allows you to code scrapers in a number of different programming languages, including Python, Ruby and PHP. If you want to get started with scraping without the hassle of setting up a programming environment on your computer, this is the way to go. Other web services, such as Google Spreadsheets and Yahoo! Pipes, also allow you to perform some extraction from other web sites.

How does a web scraper work?

Web scrapers are usually small pieces of code written in a programming language such as Python, Ruby or PHP. Choosing the right language is largely a question of which community you have access to: if there is someone in your newsroom or city already working with one of these languages, then it makes sense to adopt the same language.

While some of the click-and-point scraping tools mentioned before may be helpful to get started, the real complexity involved in scraping a web site is in addressing the right pages and the right elements within these pages to extract the desired information. These tasks aren’t about programming, but understanding the structure of the web site and database.

When displaying a web site, your browser will almost always make use of two technologies: HTTP is a way for it to communicate with the server and to request specific resources, such as documents, images or videos. HTML is the language in which web sites are composed.

The anatomy of a web page

Any HTML page is structured as a hierarchy of boxes (which are defined by HTML “tags”). A large box will contain many smaller ones — for example a table that has many smaller divisions: rows and cells. There are many types of tags that perform different functions — some produce boxes, others tables, images or links. Tags can also have additional properties (e.g. they can be unique identifiers) and can belong to groups called ‘classes’, which makes it possible to target and capture individual elements within a document. Selecting the appropriate elements this way and extracting their content is the key to writing a scraper.

Viewing the elements in a web page: everything can be broken up into boxes within boxes.

To scrape web pages, you’ll need to learn a bit about the different types of elements that can be in an HTML document. For example, the <table> element wraps a whole table, which has <tr> (table row) elements for its rows, which in turn contain <td> (table data) for each cell. The most common element type you will encounter is <div>, which can basically mean any block of content. The easiest way to get a feel for these elements is by using the developer toolbar in your browser: they will allow you to hover over any part of a web page and see what the underlying code is.

Tags work like book ends, marking the start and the end of a unit. For example <em> signifies the start of an italicized or emphasized piece of text and </em> signifies the end of that section. Easy.

An example: scraping nuclear incidents with Python

NEWS is the International Atomic Energy Agency’s (IAEA) portal on world-wide radiation incidents (and a strong contender for membership in the Weird Title Club!). The web page lists incidents in a simple, blog-like site that can be easily scraped.

To start, create a new Python scraper on ScraperWiki and you will be presented with a text area that is mostly empty, except for some scaffolding code. In another browser window, open the IAEA site and open the developer toolbar in your browser. In the “Elements” view, try to find the HTML element for one of the news item titles. Your browser’s developer toolbar helps you connect elements on the web page with the underlying HTML code.

Investigating this page will reveal that the titles are <h4> elements within a <table>. Each event is a <tr> row, which also contains a description and a date. If we want to extract the titles of all events, we should find a way to select each row in the table sequentially, while fetching all the text within the title elements.

In order to turn this process into code, we need to make ourselves aware of all the steps involved. To get a feeling for the kind of steps required, let's play a simple game: in your ScraperWiki window, try to write up individual instructions for yourself, for each thing you are going to do while writing this scraper, like steps in a recipe (prefix each line with a hash sign to tell Python that this is not real computer code). For example:

  # Look for all rows in the table
  # Unicorn must not overflow on left side.

Try to be as precise as you can and don’t assume that the program knows anything about the page you’re attempting to scrape.

Once you’ve written down some pseudo-code, let’s compare this to the essential code for our first scraper:

  import scraperwiki
  from lxml import html

In this first section, we’re importing existing functionality from libraries — snippets of pre-written code. scraperwiki will give us the ability to download web sites, while lxml is a tool for the structured analysis of HTML documents. Good news: if you are writing a Python scraper with ScraperWiki, these two lines will always be the same.

  url = "http://www-news.iaea.org/EventList.aspx"
  doc_text = scraperwiki.scrape(url)
  doc = html.fromstring(doc_text)

Next, the code makes a name (variable): url, and assigns the URL of the IAEA page as its value. This tells the scraper that this thing exists and we want to pay attention to it. Note that the URL itself is in quotes as it is not part of the program code but a string, a sequence of characters.

We then use the url variable as input to a function, scraperwiki.scrape. A function will provide some defined job — in this case it’ll download a web page. When it’s finished, it’ll assign its output to another variable, doc_text. doc_text will now hold the actual text of the website — not the visual form you see in your browser, but the source code, including all the tags. Since this form is not very easy to parse, we’ll use another function, html.fromstring, to generate a special representation where we can easily address elements, the so-called document object model (DOM).

  for row in doc.cssselect("#tblEvents tr"):
      link_in_header = row.cssselect("h4 a").pop()
      event_title = link_in_header.text
      print event_title

In this final step, we use the DOM to find each row in our table and extract the event’s title from its header. Two new concepts are used: the for loop and element selection (.cssselect). The for loop essentially does what its name implies; it will traverse a list of items, assigning each a temporary alias (row in this case) and then run any indented instructions for each item.

The other new concept, element selection, is making use of a special language to find elements in the document. CSS selectors are normally used to add layout information to HTML elements and can be used to precisely pick an element out of a page. In this case (line 6) we're selecting #tblEvents tr, which will match each <tr> within the table element with the ID tblEvents (the hash simply signifies ID). Note that this will return a list of <tr> elements.

On the next line (line 7), we apply another selector to find any <a> (which is a hyperlink) within an <h4> (a title). Here we only want to look at a single element (there's just one title per row), so we have to pop it off the top of the list returned by our selector with the .pop() function.

Note that some elements in the DOM contain actual text, i.e. text that is not part of any markup language, which we can access using the [element].text syntax seen on line 8. Finally, in line 9, we're printing that text to the ScraperWiki console. If you hit run in your scraper, the smaller window should now start listing the event names from the IAEA web site.

  Figure 58. A scraper in action (ScraperWiki)

You can now see a basic scraper operating: it downloads the web page, transforms it into the DOM form and then allows you to pick and extract certain content. Given this skeleton, you can try and solve some of the remaining problems using the ScraperWiki and Python documentation:

Can you find the address for the link in each event’s title?

Can you select the small box that contains the date and place by using its CSS class name and extract the element’s text?

ScraperWiki offers a small database to each scraper so you can store the results; copy the relevant example from their docs and adapt it so it will save the event titles, links and dates.

The event list has many pages; can you scrape multiple pages to get historic events as well?

As you’re trying to solve these challenges, have a look around ScraperWiki: there are many useful examples in the existing scrapers — and quite often, the data is pretty exciting, too. This way, you don’t need to start off your scraper from scratch: just choose one that is similar, fork it and adapt to your problem.

Source: http://datajournalismhandbook.org/1.0/en/getting_data_3.html

Friday, 5 August 2016

Are You Screen Scraping or Data Mining?

Are You Screen Scraping or Data Mining?

Many of us seem to use these terms interchangeably but let’s make sure we are clear about the differences that make each of these approaches different from the other.

Basically, screen scraping is a process where you use a computer program or software to extract information from a website.  This is different than crawling, searching or mining a site because you are not indexing everything on the page – a screen scraper simply extracts precise information selected by the user.  Screen scraping is a useful application when you want to do real-time, price and product comparisons, archive web pages, or acquire data sets that you want to evaluate or filter.

When you perform screen scraping, you are able to scrape data more directly and, you can automate the process if you are using the right solution. Different types of screen scraping services and solutions offer different ways of obtaining information. Some look directly at the html code of the webpage to grab the data while others use more advanced, visual abstraction techniques that can often avoid “breakage” errors when the web source experiences a programming or code change.

On the other hand, data mining is basically the process of automatically searching large amounts of information and data for patterns. This means that you already have the information, and what you really need to do is analyze the contents to find the useful things you need. This is very different from screen scraping, which requires you to look for the data, collect it and only then analyze it.

Data mining also involves a lot of complicated algorithms often based on various statistical methods. This process has nothing to do with how you obtain the data. All it cares about is analyzing what is available for evaluation.
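
To make the distinction concrete, here is a minimal sketch of the "mining" side in Python: the data has already been collected (a hypothetical list of scraped prices), and the only remaining work is to analyze it for a pattern:

# A hypothetical data set gathered earlier by a scraper.
prices = [19.99, 21.50, 20.75, 45.00, 20.10, 19.80]

# The "mining" step analyzes what is already on hand, e.g. flagging outliers.
average = sum(prices) / len(prices)
outliers = [p for p in prices if p > average * 1.5]
print("Average price:", round(average, 2))
print("Possible outliers:", outliers)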

Screen scraping is often mistaken for data mining when, in fact, these are two different things. Today, there are online services that offer screen scraping. Depending on what you need, you can have it custom tailored to meet your specific needs and perform precisely the tasks you want. But screen scraping does not guarantee any kind of analysis of the data.

Source: http://www.connotate.com/are-you-screen-scraping-or-data-mining/

Tuesday, 2 August 2016

Scraping data from LinkedIn

Scraping data from LinkedIn

How to scrape data from LinkedIn public profile for marketing purposes?

You can scrape data from a LinkedIn public profile using data scraper software. LinkedIn data extraction is most beneficial for marketers, and most medium-sized companies rely on LinkedIn for their marketing purposes.

I would recommend the "LinkedIn Lead Extractor" software, which helps to quickly scrape public profiles from LinkedIn. With this tool you can scrape the profile link, First Name, Last Name, Email, Phone, Address, Twitter ID, Yahoo Messenger ID, Skype ID, Google Talk ID, Job Role, Company Name, Address, Country and Connections. The company has built this tool specially for LinkedIn marketers who are not satisfied with their drop-ship supplier's digital data.

LinkedIn's advanced search provides you with a list of targeted customer profiles matching your requirements, such as country, city, company, job title and much more.

In a few weeks you can develop new ways to set up your sales teams differently and create a much more technology-driven environment in the strategy department. An internal platform that generates targeted leads can be a very big help. You can execute a go-to-market plan for any area or city in far less time than was possible a few years ago.

Source: http://www.ahmadsoftware.com/blogs/4/scraping-data-from-linkedin.html

Tuesday, 12 July 2016

Python 3 web-scraping examples with public data

Someone on the NICAR-L listserv asked for advice on the best Python libraries for web scraping. My advice below includes what I did for last spring's Computational Journalism class, specifically the Search-Script-Scrape project, which involved 101 web-scraping exercises in Python.

Best Python libraries for web scraping

For the remainder of this post, I assume you’re using Python 3.x, though the code examples will be virtually the same for 2.x. For my class last year, I had everyone install the Anaconda Python distribution, which comes with all the libraries needed to complete the Search-Script-Scrape exercises, including the ones mentioned specifically below:
The best package for general web requests, such as downloading a file or submitting a POST request to a form, is the simply-named requests library (“HTTP for Humans”).

Here’s an overly verbose example:

import requests
base_url = 'http://maps.googleapis.com/maps/api/geocode/json'
my_params = {'address': '100 Broadway, New York, NY, U.S.A',
             'language': 'ca'}
response = requests.get(base_url, params = my_params)
results = response.json()['results']
x_geo = results[0]['geometry']['location']
print(x_geo['lng'], x_geo['lat'])
# -74.01110299999999 40.7079445

For the parsing of HTML and XML, Beautiful Soup 4 seems to be the most frequently recommended. I never got around to using it because it was malfunctioning on my particular installation of Anaconda on OS X.
But I’ve found lxml to be perfectly fine. I believe both lxml and bs4 have similar capabilities – you can even specify lxml to be the parser for bs4. I think bs4 might have a friendlier syntax, but again, I don’t know, as I’ve gotten by with lxml just fine:

import requests
from lxml import html
page = requests.get("http://www.example.com").text
doc = html.fromstring(page)
link = doc.cssselect("a")[0]
print(link.text_content())
# More information...
print(link.attrib['href'])
# http://www.iana.org/domains/example
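
For comparison, a rough sketch of the same lookup written with Beautiful Soup 4, using lxml as its parser, might look like this (not taken from the original exercises, so treat it as illustrative):

import requests
from bs4 import BeautifulSoup

page = requests.get("http://www.example.com").text
soup = BeautifulSoup(page, "lxml")   # bs4 can delegate the parsing work to lxml
link = soup.select("a")[0]           # CSS selectors, much like lxml's cssselect
print(link.get_text())
# More information...
print(link["href"])
# http://www.iana.org/domains/example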

The standard urllib package also has a lot of useful utilities – I frequently use the methods from urllib.parse. Python 2 also has urllib but the methods are arranged differently.

Here’s an example of using the urljoin method to resolve the relative links on the California state data for high school test scores. The use of os.path.basename is simply for saving the each spreadsheet to your local hard drive:

from os.path import basename
from urllib.parse import urljoin
from lxml import html
import requests
base_url = 'http://www.cde.ca.gov/ds/sp/ai/'
page = requests.get(base_url).text
doc = html.fromstring(page)
hrefs = [a.attrib['href'] for a in doc.cssselect('a')]
xls_hrefs = [href for href in hrefs if 'xls' in href]
for href in xls_hrefs:
  print(href) # e.g. documents/sat02.xls
  url = urljoin(base_url, href)
  with open("/tmp/" + basename(url), 'wb') as f:
    print("Downloading", url)
    # Downloading http://www.cde.ca.gov/ds/sp/ai/documents/sat02.xls
    data = requests.get(url).content
    f.write(data)

And that’s about all you need for the majority of web-scraping work – at least the part that involves reading HTML and downloading files.
Examples of sites to scrape

The 101 scraping exercises didn’t go so great, as I didn’t give enough specifics about what the exact answers should be (e.g. round the numbers? Use complete sentences?) or even where the data files actually were – as it so happens, not everyone Googles things the same way I do. And I should’ve made them do it on a weekly basis, rather than waiting till the end of the quarter to try to cram them in before finals week.

The Github repo lists each exercise with the solution code, the relevant URL, and the number of lines in the solution code.

The exercises run the gamut from simple parsing of static HTML to inspecting AJAX-heavy sites, in which knowledge of the network panel is required to discover the JSON files to grab. In many of these exercises, the HTML parsing is the trivial part – just a few lines to parse the HTML to dynamically find the URL for the zip or Excel file to download (via requests)…and then 40 to 50 lines of unzipping/reading/filtering to get the answer. That part is beyond what is typically considered "web scraping" and falls more into "data wrangling".

I didn’t sort the exercises on the list by difficulty, and many of the solutions are not particulary great code. Sometimes I wrote the solution as if I were teaching it to a beginner. But other times I solved the problem using the style in the most randomly bizarre way relative to how I would normally solve it – hey, writing 100+ scrapers gets boring.

But here are a few representative exercises with some explanation:
1. Number of datasets currently listed on data.gov

I think data.gov actually has an API, but this script relies on finding the easiest tag to grab from the front page and extracting the text, i.e. the 186,569 from the text string, "186,569 datasets found". This is obviously not a very robust script, as it will break when data.gov is redesigned. But it serves as a quick and easy HTML-parsing example.
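
A hedged sketch of that kind of script is below. The CSS selector is a guess made up for illustration; you would need to inspect the data.gov front page to find the element that actually contains the "... datasets found" text, and update it whenever the site is redesigned, which is exactly the fragility mentioned above:

import requests
from lxml import html

page = requests.get("https://www.data.gov/").text
doc = html.fromstring(page)
# ".datasets-count" is a placeholder selector; confirm the real one in the
# browser's developer tools before trusting the result.
counts = doc.cssselect(".datasets-count")
if counts:
    print(counts[0].text_content().strip())
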
29. Number of days until Texas’s next scheduled execution

Texas’s death penalty site is probably one of the best places to practice web scraping, as the HTML is pretty straightforward on the main landing pages (there are several, for scheduled and past executions, and current inmate roster), which have enough interesting tabular data to collect. But you can make it more complex by traversing the links to collect inmate data, mugshots, and final words. This script just finds the first person on the scheduled list and does some math to print the number of days until the execution (I probably made the datetime handling more convoluted than it needs to be in the provided solution)
3. The number of people who visited a U.S. government website using Internet Explorer 6.0 in the last 90 days

The analytics.usa.gov site is a great place to practice AJAX-data scraping. It's a very simple and robust site, but either you are aware of AJAX and know how to use the network panel (and, in this case, locate ie.json), or you will have no clue how to scrape even a single number on this webpage. I think the difference between static HTML and AJAX sites is one of the tougher things to teach novices. But they pretty much have to learn the difference, given how many of today's websites use both static and dynamically-rendered pages.
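
Once the JSON file has been located in the network panel, the scraping itself is only a few lines. The sketch below assumes the file is served at analytics.usa.gov/data/live/ie.json; both that path and the key names inside the response are assumptions to be verified in the network panel:

import requests

# Path assumed from the network panel; verify it before relying on it.
url = "https://analytics.usa.gov/data/live/ie.json"
data = requests.get(url).json()
# Print the top-level keys first to see how the report is organized...
print(list(data.keys()))
# ...then drill down to the Internet Explorer 6.0 figure, e.g. something like
# data["totals"]["ie_version"]["6.0"] (these key names are guesses).
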
6. From 2010 to 2013, the change in median cost of health, dental, and vision coverage for California city employees

There’s actually no HTML parsing if you assume the URLs for the data files can be hard coded. So besides the nominal use of the requests library, this ends up being a data-wrangling exercise: download two specific zip files, unzip them, read the CSV files, filter the dictionaries, then do some math.
90. The currently serving U.S. congressmember with the most Twitter followers

Another example with no HTML parsing, but probably the most complicated example. You have to download and parse the Sunlight Foundation's CSV of Congressmember data to get all the Twitter usernames. Then authenticate with Twitter's API, then perform multiple batch lookups to get the data for all 500+ Congressional Twitter usernames. Then join the sorted result with the actual Congressmember identity. I probably shouldn't have assigned this one.
HTML is not necessary

I included no-HTML exercises because there are plenty of data programming exercises that don’t have to deal with the specific nitty-gritty of the Web, such as understanding HTTP and/or HTML. It’s not just that a lot of public data has moved to JSON (e.g. the FEC API) – but that much of the best public data is found in bulk CSV and database files. These files can be programmatically fetched with simple usage of the requests library.
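
Fetching and reading such a bulk CSV file takes only a few lines; the URL below is a placeholder for whichever data file you are actually after:

import csv
import io
import requests

# Placeholder URL; substitute the bulk CSV file you want.
url = "https://example.com/some_bulk_data.csv"
text = requests.get(url).text
rows = list(csv.DictReader(io.StringIO(text)))
print(len(rows), "records")
print(rows[0] if rows else "no data")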

It’s not that parsing HTML isn’t a whole boatload of fun – and being able to do so is a useful skill if you want to build websites. But I believe novices have more than enough to learn from in sorting/filtering dictionaries and lists without worrying about learning how a website works.

Besides analytics.usa.gov, the data.usajobs.gov API, which lists federal job openings, is a great one to explore, because its data structure is simple and the site is robust. Here’s a Python exercise with the USAJobs API; and here’s one in Bash.

There’s also the Google Maps geocoding API, which can be hit up for a bit before you run into rate limits, and you get the bonus of teaching geocoding concepts. The NYTimes API requires creating an account, but you not only get good APIs for some political data, but for content data (i.e. articles, bestselling books) that is interesting fodder for journalism-related analysis.

But if you want to scrape HTML, then the Texas death penalty pages are the way to go, because of the simplicity of the HTML and the numerous ways you can traverse the pages and collect interesting data points. Besides the previously mentioned Texas Python scraping exercise, here’s one for Florida’s list of executions. And here’s a Bash exercise that scrapes data from Texas, Florida, and California and does a simple demographic analysis.

If you want more interesting public datasets – most of which require only a minimal amount of HTML parsing to fetch – check out the list I talked about in last week's info session on Stanford's Computational Journalism Lab.

Source URL :  http://blog.danwin.com/examples-of-web-scraping-in-python-3-x-for-data-journalists/

Monday, 11 July 2016

Web Data Scraping: Practical Uses

Whether in the form of media, text or data in diverse other formats, the internet serves as a huge storehouse of the world's information. While browsing for commercial or business needs alike, users are exposed to numerous web pages that contain data in just about every form. Even though access to such data is extremely critical for success in the contemporary world, unfortunately most of it is not open. More often than not, business websites restrict access to such data and do not allow visitors to save or display it for reuse on their local storage devices or on their own websites. This is where web data extraction tools come in handy.

Read on for a closer look into some of the common areas of data scraping usage.

• Gathering of data from diverse sources for analysis: If a business needs to collect and analyze data for specific categories from multiple websites, it helps to refer to web data integration experts or data scraping specialists who work with categories like industrial equipment, real estate, automobiles, marketing, business contacts, electronic gadgets and so forth.

• Collection of data in different formats: Different websites publish information and structured data in different formats, so it may not be possible for organizations to see all the required data in one place at any given time. Data scrapers allow the extraction of information spanning multiple pages and sections into a single database or spreadsheet. This makes it easy for users to analyze (or visualize) the data.

• Helps Research: Data is an important and integral part of all kinds of research – marketing, academic or scientific. A data scraper helps in gathering structured data with ease.

• Market analysis for businesses: Companies that offer products or services in specific domains require comprehensive data on similar products and services, which tend to appear in the market on a daily basis.

Web scraping software solutions from reputed companies keep a constant watch on this kind of data and allow users to access the required information from diverse sources – all at the click of a button.
Go for data extraction to take your business to the next levels of success – you will not be disappointed.

Source URL : http://www.3idatascraping.com/web-data-scraping-practical-uses.php

Friday, 8 July 2016

ECJ clarifies Database Directive scope in screen scraping case

The European Court of Justice (ECJ) has ruled on the scope of Directive 96/9/EC on the legal protection of databases (Database Directive) in a case concerning the extraction of data from a third party's website by means of automated systems or software for commercial purposes (so-called 'screen scraping').

Flight data extracted

The case, Ryanair Ltd vs. PR Aviation BV, C-30/14, is of interest to a range of companies, such as price comparison websites. It stemmed from Dutch company PR Aviation's operation of a website where consumers can search through the flight data of low-cost airlines (including Ryanair), compare prices and, on payment of a commission, book a flight. The relevant flight data is extracted from third parties' websites by means of 'screen scraping' practices.

Ryanair claimed that PR Aviation’s activity:

• amounted to infringement of copyright (relating to the structure and architecture of the database) and of the so-called sui generis database right (i.e. the right granted to the ‘maker’ of the database where certain investments have been made to obtain, verify, or present the contents of a database) under the Netherlands law implementing the Database Directive;

• constituted breach of contract. In this respect, Ryanair claimed that a contract existed with PR Aviation for the use of its website. Access to the latter requires acceptance, by clicking a box, of the airline’s general terms and conditions which, amongst others, prohibit unauthorized ‘screen scraping’ practices for commercial purposes.

Ryanair asked Dutch courts to prohibit the infringement and order damages. In recent years the company has been engaged in several legal cases against web scrapers across Europe.

The Local Court, Utrecht, and the Court of Appeals of Amsterdam dismissed Ryanair’s claims on different grounds. The Court of Appeals, in particular, cited PR Aviation’s screen scraping of Ryanair’s website as amounting to a “normal use” of said website within the meaning of the lawful user exceptions under Sections 6 and 8 of the Database Directive, which cannot be derogated by contract (Section 15).

Ryanair appealed

Ryanair appealed the decision before the Netherlands Supreme Court (Hoge Raad der Nederlanden), which decided to refer the following question to the ECJ for a preliminary ruling: “Does the application of [Directive 96/9] also extend to online databases which are not protected by copyright on the basis of Chapter II of said directive or by a sui generis right on the basis of Chapter III, in the sense that the freedom to use such databases through the (whether or not analogous) application of Article[s] 6(1) and 8, in conjunction with Article 15 [of Directive 96/9] may not be limited contractually?.”

The ECJ’s ruling

The ECJ (without the need of the opinion of the advocate general) ruled that the Database Directive is not applicable to databases which are not protected either by copyright or by the sui generis database right. Therefore, exceptions to restricted acts set forth by Sections 6 and 8 of the Directive do not prevent the database owner from establishing contractual limitations on its use by third parties. In other words, restrictions to the freedom to contract set forth by the Database Directive do not apply in cases of unprotected databases. Whether Ryanair’s website may be entitled to copyright or sui generis database right protection needs to be determined by the competent national court.

The ECJ’s decision is not particularly striking from a legal standpoint. Yet, it could have a significant impact on the business model of price comparison websites, aggregators, and similar businesses. Owners of databases that could not rely on intellectual property protection may contractually prevent extraction and use (“scraping”) of content from their online databases. Thus, unprotected databases could receive greater protection than the one granted by IP law.

Antitrust implications

However, the lawfulness of contractual restrictions prohibiting access and reuse of data through screen scraping practices should be assessed under an antitrust perspective. In this respect, in 2013 the Court of Milan ruled that Ryanair’s refusal to grant access to its database to the online travel agency Viaggiare S.r.l. amounted to an abuse of dominant position in the downstream market of information and intermediation on flights (decision of June 4, 2013 Viaggiare S.r.l. vs Ryanair Ltd). Indeed, a balance should be struck between the need to compensate the efforts and investments made by the creator of the database with the interest of third parties to be granted with access to information (especially in those cases where the latter are not entitled to copyright protection).

Additionally, web scraping triggers other issues which have not been considered by the ECJ's ruling. These include, but are not limited to, trademark law (i.e. whether the use of a company's names/logos by the web scraper without consent may amount to trademark infringement), data protection (e.g. in case the scraping involves personal data), and unfair competition.


Source URL :http://yellowpagesdatascraping.blogspot.in/2015/07/ecj-clarifies-database-directive-scope.html

Thursday, 30 June 2016

Increasing Accessibility by Scraping Information From PDF

You may have heard about data scraping, a method used by computer programs to extract data from the output of another program. To put it simply, it is a process involving the automatic sorting of information found in different resources, including the internet, whether inside an HTML file, a PDF or another kind of document. In addition to that, there is the collection of pertinent information. These pieces of information are stored in databases or spreadsheets so that users can retrieve them later.

Most websites today have text that can be accessed and read easily in the source code. However, many businesses nowadays choose to make use of Adobe PDF, or Portable Document Format, files. This is a type of file that can be viewed simply by using the free Adobe Acrobat software, which almost any operating system supports. There are many advantages to using PDF files. Among them is that the document looks exactly the same on any computer on which it is viewed, which makes it ideal for business documents or specification sheets. Of course there are disadvantages as well. One of them is that the text contained in the file is converted into an image, which often causes problems when it comes to copying and pasting.

This is why some have started scraping information from PDFs. This is often called PDF scraping, a process just like data scraping except that the information you obtain is contained in PDF files. In order to begin scraping information from PDFs, you must choose a tool that is specifically designed for this process. However, you will find that it is not easy to locate the right tool for performing PDF scraping effectively, because most tools today have problems obtaining exactly the data you want without customization.

Nevertheless, if you search well enough, you will be able to find the program you are looking for. There is no need to have programming knowledge in order to use such tools; you can simply specify your own preferences and the software will do the rest of the work for you. There are also companies you can contact that will perform the task for you, since they have the right tools at hand. Doing things manually is tedious and complicated, whereas professionals can finish the job in no time at all. Scraping information from PDF is a process of collecting information that can be found on the internet, and it does not infringe copyright laws.
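
If you would rather experiment on your own first, a minimal sketch in Python using the pdfminer.six library (one of several free options; the file name is a placeholder) looks like this:

# Requires: pip install pdfminer.six
from pdfminer.high_level import extract_text

# Placeholder path; point this at the PDF you want to scrape.
text = extract_text("specification_sheet.pdf")
print(text[:500])   # preview the first 500 characters of extracted text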

 Source  URL : http://ezinearticles.com/?Increasing-Accessibility-by-Scraping-Information-From-PDF&id=4593863

Saturday, 18 June 2016

Taking a cue from the Ryanair screen scraping judgment

Screen scraping is the best way to aggregate web data much faster than a human possibly can. However, is there any such thing known as ethical web scraping?

Well it’s not scraping in itself that is good or bad. What matters is how you use the scraped data. It would be extremely unethical to steal data, republish it or use it to cause harm to a business. This was clearly established when the EU passed a judgment in favour of Ryanair.

The EU’s highest court passed a judgment in favour of Ryan air, and this will positively prevent people from scraping and using others data in an unethical manner. Ryanair had claimed that PR Aviation scraped its data regarding the flight scheduled and then used it to allow people book Ryanair flights via the PR Aviation website and this is clearly an infringement of database rights.

Source:http://www.habiledata.com/blog/taking-cue-ryanair-screen-scraping-judgment/

Friday, 27 May 2016

Data Extraction Services

Are you finding it tedious to perform your routine tasks while also finding time to research information? Don't worry; all you have to do is outsource your data extraction requirements to reliable service providers such as Hi-Tech BPO Services.

We can assist you in finding, extracting, gathering, processing and validating all the required data through our effective data extraction services. We can extract data from any given source such as websites, databases, printed documents, directories, etc.

With a whole plethora of data extraction solutions, we are definitely a one-stop solution for all your data extraction requirements.

For utilizing our data extraction services, all you have to do is outsource data extraction requirements to us, and we will create effective strategies and extract the required data from all preferred sources. Then we will arrange all the extracted data in a systematic order.

Types of data extraction services provided by our data extraction India unit:

The data extraction India unit of Hi-Tech BPO Services can attend to all types of outsource data extraction requirements. Following are just some of the data extraction services we have delivered:

•    Data extraction from websites
•    Data extraction from databases
•    Extraction of data from directories
•    Extracting data from books
•    Data extraction from forms
•    Extracting data from printed materials

Features of Our Data Extraction Services:

•    Reliable collection of resources for data extraction
•    Extensive range of data extraction services
•    Data can be extracted from any available source be it a digital source or a hard copy source
•    Proper researching, extraction, gathering, processing and validation of data
•    Reasonably priced data extraction services
•    Quality and confidentiality ensured through various strict measures

Our data extraction India unit has the competency to handle any of your data extraction services requirements. Just provide us with your specific requirements and we will extract data accordingly from your preferred resources, if particularly specified. Otherwise we will completely rely on our collection of resources for extracting data for you.

Source: http://www.hitechbposervices.com/data-extraction.php