
About LogEc


LogEc collects access statistics from several services which use the RePEc data set, the largest online collection of Economics Working Papers, Journal Articles and Software Components. LogEc provides a convenient way of tracking trends in the profession (new work can be found in RePEc before it hits the journals) and the impact of your own work.

Please contact us if you have any questions about this site or the statistics.

Contributing and publicizing your work

Publishers and Working Paper providers:
Open a RePEc archive to get your material listed in the RePEc services below and usage statistics at LogEc.

Authors:
Register yourself and your works in the RePEc Author Service. These registrations are the basis for the author pages.

If your organization is not contributing data to RePEc (it should!) you can still have your work included in RePEc and EconPapers by uploading it to the Munich Personal RePEc Archive (MPRA), a special RePEc archive that accepts personal contributions.
EconWPA, the Economics Working Paper Archive, used to play this role but is no longer accepting new papers.

The Sites

LogEc currently collects access statistics from the following sites and services providing access to the RePEc data set:

  • Economists Online (statistics available from 2010-01 to 2013-12)
  • EconPapers (statistics available from 2001-07)
  • IDEAS (statistics available from 1998-05, downloads from 2000-10)
  • NetEc (statistics available from 1998-01 to 2005-03)
  • New Economic Papers (statistics available from 2002-06)
  • Socionet (statistics available from 2001-01)

The Statistics

The statistics are updated monthly around the third of each month when the server logs from the participating sites are collected and merged.

Producing meaningful statistics for accesses to web servers is a difficult task, especially so since we are merging data from several different sites. Rather than just counting the number of times a page or file is accessed (by a human or a piece of software indexing the web) the goal is to get as close as possible to a measure of the number of people showing an interest in a paper by reading the abstract page or downloading the full text file.

To accomplish this we

  1. Remove accesses by robots and spiders.
  2. Avoid double counting.
  3. Apply additional heuristics to remove accesses by automated processes and download abuses.

While not perfect, the net effect of these steps is to produce statistics that closely approximate the number of actual persons viewing the abstract of a paper or downloading a full text file. Note that the download counts only include downloads made by clicking on a download link in a participating RePEc service.

Robots

Robots and spiders (the software that indexes the web) account for some 60% of the hits to these sites. The statistics would be completely misleading if we did not remove the accesses made by robots.

Robots are primarily identified by checking whether a host has requested the /robots.txt file. Robots adhering to the Robot Exclusion Standard check this file to see which parts of a web site they shouldn't index.

In addition, an effort is made to identify robots that don't request /robots.txt and hosts that display robot-like behavior. This feature was introduced with the access statistics for November 2002; at the same time the historical data were revised to make them comparable. These "robot-like" hosts are identified by looking for hosts or groups of hosts with an excessive number of accesses. A host is declared a robot and its accesses excluded from the statistics if

  • It accesses an excessive number of items.
  • The C-class net it belongs to accesses an excessive number of items.
  • It belongs to a domain, such as googlebot.com or inktomisearch.com, known to contain robots.
  • It matches some additional heuristics that we would rather not divulge...
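In outline, these rules can be sketched as a simple log filter. This is an illustration only: the threshold, the record format and the function name are invented here, and the real LogEc cutoffs, the C-class net rule and the undisclosed heuristics are not public.

```python
from collections import defaultdict

# Hypothetical threshold and domain list -- the real LogEc values
# are not published.
MAX_ITEMS_PER_HOST = 1000
KNOWN_ROBOT_DOMAINS = {"googlebot.com", "inktomisearch.com"}

def identify_robots(accesses):
    """accesses: iterable of (host, domain, path) tuples from merged logs.
    Returns the set of hosts whose accesses should be excluded."""
    robots = set()
    items_by_host = defaultdict(set)
    for host, domain, path in accesses:
        if path == "/robots.txt":            # self-identified robot
            robots.add(host)
        elif domain in KNOWN_ROBOT_DOMAINS:  # known crawler domain
            robots.add(host)
        else:
            items_by_host[host].add(path)
    # hosts requesting an excessive number of distinct items
    for host, items in items_by_host.items():
        if len(items) > MAX_ITEMS_PER_HOST:
            robots.add(host)
    return robots
```

In practice the flagged hosts are inspected before their traffic is discarded, since heavy but legitimate use (such as off-line downloading) can look similar.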

In July 2010 this identified about 600 robots with 791,000 abstract views and 5,600 downloads. This is in addition to the 4,300 robots that identified themselves by accessing robots.txt and accounted for over 24 million abstract views and about 600,000 downloads.
In July 2007 this identified about 1,600 robots in addition to the 7,900 robots that accessed robots.txt. The additional robots had over 1,260,000 abstract views and 1,500 full text downloads. In total robots accounted for over 4 million (74% of the total) abstract views and over 25,000 (7% of the total) full text downloads at the participating RePEc services.
In November 2002 this identified about 150 robots in addition to the 1400 robots which requested robots.txt. The additional robots had over 600,000 abstract views and over 15,000 downloads. Inspection of the data confirmed that these were indeed genuine robots or, in a few cases, people downloading large portions of a site for off-line viewing. Overall this additional step has reduced the cumulative count of abstract views by about 45% and has a very small impact on the full text downloads.

Robot activity

Clearly search engine robots account for a significant portion of the traffic at the RePEc sites and on the Internet in general. This is obviously driven by the desire to have as fresh an index and as broad a coverage as possible. Taking a look at how many requests there have been from different search engines gives an indication of how good they are in this respect.
Accesses from top robot domains

  July 2007
  Domain              Abstracts   Full texts
  inktomisearch.com   1,240,751        1,636
  ask.com               935,242            0
  googlebot.com         878,787       10,460
  attens.net*           137,495            0
  msn.com               101,269            0

  July 2010
  Domain              Abstracts   Full texts
  googlebot.com       4,993,788      257,386
  yahoo.net           2,348,188       13,767
  scoutjet.com        1,698,991        3,246
  msn.com             1,458,308          936
  yandex.ru             403,734        8,553
  exabot.com            241,783           15
  baidu.com             185,280            2
  ask.com               171,276            5
  amazonaws.com         128,198           24
*This is a curious beast and does not seem to support a publicly available search engine. The user agent string is "ConveraCrawler/0.9d ( http://www.authoritativeweb.com/crawl)" but it is run from a net that belongs to AT&T. The URL gives no useful information.

Double counting

Double counting occurs when a person views an abstract page more than once or, perhaps being impatient, clicks on a download link more than once. In each case it would be misleading to count this as more than one abstract view or file download. To avoid this double counting we keep track of the originating IP-number of each access and count only one access to a specific resource for each IP-number.

The strategy for avoiding double counting introduces a slight undercount when, for example, several computers behind a firewall share the same external IP-number. By comparing with statistics obtained by identifying users with cookies rather than their IP-numbers we can estimate this undercount at about 2% for abstract views and 1% for downloads.
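The deduplication described above amounts to counting distinct (IP, item) pairs within a reporting period. A minimal sketch, with the record format invented for illustration:

```python
def count_unique_accesses(accesses):
    """accesses: iterable of (ip, item) pairs for one month.
    Each (ip, item) combination is counted once, so repeated views or
    impatient double clicks from the same IP are not double counted."""
    return len(set(accesses))

hits = [
    ("10.0.0.1", "paper-A"),  # first view
    ("10.0.0.1", "paper-A"),  # reload: not counted again
    ("10.0.0.2", "paper-A"),  # different IP: counted
    ("10.0.0.1", "paper-B"),  # different paper: counted
]
print(count_unique_accesses(hits))  # 3
```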

Additional heuristics

Over time it has become clear that the simple filtering for robots and removal of double clicks discussed above is not enough. Many new practices have developed on the web, some for a good purpose, some for a more questionable one. There are spam-bots, referer spamming (a stupid idea if there ever was one), anti-malware software that checks links on a web page and warns users about dangerous ones, and much, much more that should not be counted. And, yes, there appears to be the occasional attempt to manipulate the statistics.

Starting from July 2010 we apply an additional set of heuristics to filter out these accesses. In conjunction with this we have also recalculated the statistics going back to January 2008. The overall effect is relatively small but there are substantial reductions in the number of accesses for a small number of papers.

In January 2017 a new type of systematic downloads, mainly affecting papers included in the NEP current awareness service, was detected. This has led to a new set of heuristics that have also been applied to earlier months. Some papers will see a drastic reduction in the number of downloads or abstract views for certain months as a result.

January 2022 update: The amount of robot-like accesses has increased substantially and appears not to be associated with traditional, (mostly) well-behaved search engines. Instead this looks like efforts to harvest the RePEc data for unclear purposes (there are better ways of doing this, and the data is freely available). There is also the issue of DDoS attacks, which can distort the statistics if not properly identified. To properly identify these non-human accesses, an additional set of filters is applied from January 2022. While likely also discarding some legitimate traffic, this will provide more accurate counts. Overall this reduced the number of abstract accesses by 54% and the number of downloads by 7%. The new set of rules has not been applied retroactively, so there is a break in the time series that mainly affects abstract accesses. This update to the statistics is also discussed in the RePEc Blog.

We are continually working on improving the statistics and will add new filters over time.

Oddities in the statistics

There are sometimes more downloads than abstract views registered for a paper. This is primarily due to two reasons. First, new papers are announced in the New Economics Papers service, which regularly sends out e-mails with information about new papers; the reader can download a paper by clicking a link in the e-mail, in which case no abstract view is registered. In addition, Google Scholar sometimes links directly to the download link at one of the RePEc services rather than to the abstract page. This can lead to more downloads than abstract views being registered for older papers as well.

Programmatic access to the data

Occasionally we get requests about data for research purposes. While we do not have the resources to run special queries to create custom data sets, all the results we present on the web are available in machine-readable form as well as the standard HTML presentation.

Simply append the argument 'format=csv' and the data will be returned as a tab-separated file. For example http://logec.repec.org/scripts/itemstat.pf?topnum=50;type=redif-paper;sortby=td;format=csv will return the 50 working papers with the highest total downloads. This is available for top working papers, journal articles, books, chapters, software, authors, working paper series, journals and rankings within working paper series and journals.
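The tab-separated output can be consumed with any CSV library. A minimal Python sketch, building the query URL in the style shown above; the column names in the sample response are made up for illustration, the real header comes from the service:

```python
import csv
import io

BASE = "http://logec.repec.org/scripts/itemstat.pf"

def build_query(params):
    """Build a LogEc query URL with semicolon-separated arguments
    and format=csv appended for machine-readable output."""
    query = ";".join(f"{k}={v}" for k, v in params.items())
    return f"{BASE}?{query};format=csv"

def parse_logec(text):
    """Parse a tab-separated LogEc response into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# Made-up two-row response; real column names may differ.
sample = ("rank\thandle\tdownloads\n"
          "1\tRePEc:aaa:wpaper:1\t1234\n"
          "2\tRePEc:bbb:wpaper:2\t999\n")
rows = parse_logec(sample)
print(build_query({"topnum": 50, "type": "redif-paper", "sortby": "td"}))
print(rows[0]["downloads"])  # 1234
```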

In addition there is a facility to obtain a list of the works claimed by an author. This is the authorworks.pf script. It takes one argument, id - the RePEc short-id of the author - and returns a text file with the handles of the works claimed by the author. For example, http://logec.repec.org/scripts/authorworks.pf?id=pka1. Detailed download statistics for each work can then be obtained with the paperstat.pf script.

Constructing a query

Use the web interface to construct the desired query, then add the format=csv parameter to get a downloadable file. In addition, some of the scripts take arguments that are not directly available through the web interface:
authorstat.pf
author - the short-id of an author. Displays statistics for a single author.

Credits

LogEc is run by Sune Karlsson on hardware provided by the Swedish Business School at Örebro University.

LogEc wouldn't be possible without the assistance and support of the maintainers of the participating services, Thomas Krichel, Christian Zimmermann, Sergei Parinov and José Manuel Barrueco.

The whole exercise would of course be pointless without the work of all the RePEc archive maintainers who provide the data about the working papers, articles and software items. And RePEc itself wouldn't be what it is without the continued effort of the RePEc Team.

Thanks also to Olaf Storbeck of the German business daily Handelsblatt for many useful suggestions on how to improve the statistics. The Handelsblatt runs a weekly "Economics" page which frequently features rankings of economists and of Economics papers based on LogEc data.


Page updated 2022-02-08