Research questions

From Open Access Directory

Revision as of 16:19, 12 May 2015

This list is part of the Open Access Directory.

  • This is a list of research questions in need of researchers. It started as an article in SOAN for May 2008.

Access

  • Publishers often assert that all or most of those who need access to peer-reviewed journal literature already have access. Who doesn't have access? What kinds of people don't have access and how well can we measure their numbers?
    • It's important to separate lay readers without access from professional researchers (in the academy, industry, and the professions) without access. Among professional researchers without access, it would help to classify by country and field.
    • It's also important to distinguish demand for access from people without access. Some of those without access may not care to have it. How well can we measure the demand for access among those who don't currently have it?
    • Can we redo the estimates annually in order to have a moving measurement of our progress in closing the access gap and meeting the unmet demand?
    • Related: Gathering Firm Evidence on the Limits of Toll Access, on the Research in progress page.
  • How accurately can a subscription journal measure the number of professional researchers in the relevant fields who don't have paid access to its contents?
    • How accurately can researchers, universities, libraries, and governments make their own measurements for a given journal, without access to its list of subscribers? How accurately, and how easily, can we reconstruct a journal's list of subscribers from OPACs?
  • What is the current rate of self-archiving in different fields and countries? Can we graph the change in these rates over time? Can we disentangle spontaneous self-archiving from self-archiving encouraged or required by funders and universities? Can we calculate both the percentage of self-archiving authors and the percentage of self-archived papers?
    • How accurately can we rank the disciplines by their levels of OA archiving? Even if the widely-held assumption is correct that physics is first, what's the rest of the picture?
  • To compute the percentage of OA archiving, you first need to estimate how much publishing there is in a particular discipline. Does anyone have suggestions on how to estimate this?
  • What percentage of published articles from a given year or a given journal have OA copies somewhere online? Can we break this down by permitted copies and unpermitted ones? Can we break it down by OA preprints and OA postprints? Can we break it down by field? Can we collect these numbers easily enough to recompute them annually and chart future progress?
    • In April 2008, Bo-Christer Björk, Annikki Roos, and Mari Lauri released a paper doing a significant part of this calculation. (See Open Access News.)
  • Hypothesis: there is a rural / urban split in subscription-based access. This will appear in many countries, developed and developing alike. The largest gap will be evident for the public, followed by educational institutions. There will be some gaps even for research institutions, i.e. even a remote / field location of a research university will not always have the same access as the main campus.
    • This can be tested with a case scenario, e.g. a member of the public in a particular location. Does the location have a public library, and if so, what scholarly subscriptions, if any, are available? What about ILL services? Is there a local college or university campus, and if so, is there walk-in access? If walk-in access is available but not close by, what would be involved for the person to take advantage of it? Is it a simple trip to a larger centre, something the person would be likely to do anyway for other reasons such as shopping? Or is it an expensive trip?
    • This research would be very well-suited for collaboration in different regions.
    • Note likely confounding factor: there will also likely be a rural-urban split in quality and availability of internet. This may be inconsistent (e.g., some very remote locations are very well-served via satellite).
  • Hypothesis: there is a split in subscription-based access based on relative wealth, both across and within countries and sectors. A mid-sized college in a poorer area will tend to have fewer subscriptions and may not have as robust support for interlibrary loans services. Note: poorer colleges and universities need to attract students. Not all will be happy to advertise less access. Perhaps an anonymous study would help?
    • A lack of expressed demand for access may itself stem from lack of access. Is there research on demand for reading materials, or the right to read, among illiterate people?
    • When Medline was made freely available, usage jumped a hundredfold. This may be worth exploring and documenting in detail. Is there a relationship between the free availability of Medline and the emergence of evidence-based medicine? Possible approaches: usage statistics, surveys, interviews.
  • Is access only binary - you have it or you don't? Is there value in studying opportunity cost for pay-per-view or ILL?
  • There is an emerging consensus to allow (or even require) self-archiving for the final version of the author's peer-reviewed manuscript, and to give the publisher exclusivity for the published edition. Can we sample widely in different fields and come up with any useful generalizations on how widely these two versions differ from one another?
  • What is the average number of peer-reviewed scholarly journals to which public libraries subscribe? Can we break this down by field of journal and nationality of library?
    • One simple illustration of how this might be approached as a case study is Heather Morrison's The Access Gap in British Columbia. There will be scholarly journals in some of the aggregated journal packages that public libraries subscribe to. It would be worth comparing the titles against DOAJ or other OA lists, and noting the number of non-embargoed titles.
    • Historical studies might be interesting, too. Willinsky talks about public libraries providing scholarly journals in The Access Principle. Have public libraries gone through periods where they were more likely to collect scholarly materials? Is this collections policy, or due to the cost of the journals?
  • Why does the OA impact advantage differ by field? What are the key variables?
    • Is the OA impact advantage correlative (e.g. quality bias - authors and publishers tend to make better papers open access) or causative in nature, or some combination?
  • How much economic value is produced by OA to research literature and data? That is, if the basic peer-reviewed literature and all associated data were OA, then what kinds of economic activity would that trigger and what is its total net value?
    • This is the overarching question of the EASI-OA (Economic and Social Impacts of Open Access) research project. There are many sub-questions awaiting exploration.
  • Let's say that a "loophole mandate" is a policy (at a university or funding agency) requiring green OA except when the publisher does not allow it. The policy gives an opt-out to publishers, whether or not it also gives one to authors. Let's say that a "non-loophole mandate" closes this loophole, even if it still gives an opt-out to authors.
    • How do loophole and non-loophole mandates compare in delivering green OA for the articles covered by the policy? What percentage of articles covered by loophole mandates are not made OA because publisher policies do not allow it? (Be careful not to confuse these questions with the "compliance rate" for loophole mandates; failing to deliver OA because the publisher doesn't allow it still "complies" with a loophole mandate.)
    • Basically, how many publishers take advantage of the loopholes in loophole mandates, and what is the effect on the overall percentage of OA the institution is able to achieve?
  • When a university library cancels a "big deal", what is the effect on research at the institution? How does the library cope (e.g. subscribing "a la carte" to some but not all of the titles formerly covered by the big deal, increasing the budget for pay-per-view and interlibrary loan)? How do faculty cope (e.g. using pay-per-view, interlibrary loan, emails to authors, doing without)?
  • Who actually accesses OA articles, and how many of them are not scientists? The increasing use of tools like Google Translate for reading OA articles might be taken as an indicator that the proportion of non-scientist readers with native languages other than English is growing. Perhaps best to address this with a cross-publisher survey?
  • How many OA publishers are there (at a given time)? Is this a number that could be tracked and updated by DOAJ, SHERPA, or OASPA?
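Several questions above call for metrics that are recomputed annually to chart progress in closing the access gap (e.g. the percentage of a year's articles with OA copies online). A minimal sketch of that moving measurement, using hypothetical placeholder counts rather than real data:

```python
# A minimal sketch of the annual "moving measurement" several questions above
# ask for: the share of a year's articles with an OA copy found online.
# All figures below are hypothetical placeholders, not real measurements.

def oa_share_by_year(published, oa_found):
    """Return {year: OA share in percent} for years present in both dicts."""
    return {
        year: 100.0 * oa_found[year] / published[year]
        for year in sorted(published)
        if year in oa_found and published[year] > 0
    }

if __name__ == "__main__":
    published = {2012: 1_800_000, 2013: 1_900_000, 2014: 2_000_000}  # hypothetical
    oa_found = {2012: 360_000, 2013: 475_000, 2014: 600_000}         # hypothetical
    for year, pct in oa_share_by_year(published, oa_found).items():
        print(f"{year}: {pct:.1f}% of articles had an OA copy")
```

The hard part, of course, is producing the input counts; the point of the sketch is only that once a sampling method is fixed, rerunning it on a schedule yields the year-over-year trend the questions ask for.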

Quality

  • Can open content be more easily screened and tagged for problems?
    • Also see the context on the origin of this question.

Journal business models

  • How is the recession affecting journals converting from TA to OA, and journals converting from OA to TA? Is the recession triggering more of one type than the other?
    • See the OAD lists of OA-TA conversions and TA-OA conversions. But be sure they are up to date before relying on them for such a study.
    • How do OA-TA conversions compare to TA journal failures? Is the recession doing more damage to OA journals or TA journals?
  • What is the breakdown of green journals by field? Why do the fields differ in this respect? What are the key variables?
  • Identify the publishers who do not yet allow author-initiated OA archiving. Sort them from largest to smallest (by numbers of journals or articles published), and break them down by field. If we redo the numbers every year, would we find that they are rising or falling?
    • If we identified the largest holdouts, how effectively would that information alone change behavior of researchers (as authors, referees, and editors)?
  • Hypothesis: author submissions will decrease at toll-access journals that do not allow author self-archiving, resulting in a decline in the quantity of articles published, or quality, or both. Rationale: authors who are educated about open access, aware of the OA impact advantage, or affected by open access policies are likely to seek open-access-friendly venues for publication. The rate of submission decline will vary, depending on the discipline, relevant open access policies, and the availability and credibility of open access alternatives (open access journals and self-archiving-friendly journals). [Poetic version of the hypothesis: Whither White, Fair RoMEO?]
  • We know roughly what percentage of peer-reviewed journals are OA: ((number listed in the DOAJ / 25,000) * 100). But what percentage of newly launched peer-reviewed journals are OA? Have we reached the cross-over point when the majority of new launches are OA?
  • What percentage of peer-reviewed, free online journals go beyond removing price barriers to the removal of at least some permission barriers? Of those removing permission barriers, how many use a CC-BY license (or equivalent), a CC-BY-NC license (or equivalent), and so on?
    • Some thoughts from Heather Morrison on IJPE started the following section:
    • A simple way to obtain a rough measure of this is to review the DOAJ journals and note which journals use Creative Commons licensing and which license each one uses. It would be helpful to follow the DOAJ subject lists for this, as there could be disciplinary differences, and also to note the publishers. BMC uses CC-BY, for example, and is a large enough publisher to affect the percentage of journals using CC-BY in biomedicine. It would be a good idea to download the DOAJ title list on a particular date and work with that. Otherwise, the ongoing dramatic growth of DOAJ will mean your title list keeps changing as you research.
    • For a smaller but useful study, just look at one of the DOAJ subject areas, or a few very different subject areas to get some idea of the range. This could be useful information in and of itself, and/or as a pilot project for a larger study.
    • Another interesting research question would be whether DOAJ journals that are not using Creative Commons licensing consider their journals to be providing some aspect of permissions OA. A survey approach, working cooperatively with DOAJ to distribute the survey to DOAJ journals, might be workable.
  • Only a minority of OA journals charge author-side publication fees. What are the other OA journal business models? This may require a lot of emails and phone calls, since many journals don't give business details on their web sites. The first phase of this research is simply to document the range of models actually in use. The second phase is to study which models work best, and worst, in which niches.
    • The OAD has a list of OA journal business models. But the question is how far does that list cover the ground? How many business models or revenue sources does it not yet include?
  • For journals that charge publication fees:
    • What is the range of fees and the average? Can we break this down by field and country? Can we break it down by what the author (and readers) get for the money? For example, some publish the articles under open licenses and some take the fee and leave users with the limitations and uncertainties of fair use.
    • How many waive the fee in cases of economic hardship? What tests do they use, if any, to decide whether to give a waiver?
    • How many of the fees are paid by the author out of pocket and how many by the author's funder, the author's employer, or some other source?
  • Can conventional TA journals survive if they provide OA after an embargo period? If so, what is the shortest embargo period compatible with their survival? This will probably differ by field.
    • Since the rise of green OA hasn't yet triggered cancellations even in the field with the longest history and highest levels of self-archiving (physics), it may be too soon to make these measurements. But how well can we estimate them? How far can we base the estimates on actual renewal and cancellation decisions rather than on abstract preferences, current predictions, or hypothetical decisions?
    • One difficult variable is the effect of rising levels of green OA on subscriptions. On that one, we may just have to wait. But one variable we may be able to measure today is the rate at which usage and citations of articles (in a given field or from a given journal) decline after the date of publication.
  • If not all disciplines will be like physics (in which high-level OA archiving coexists with TA journals, and publishers cannot identify any cancellations attributable to OA archiving), then which disciplines will and will not be like physics? What are the key variables? How can we know?
    • Why have subscription-based journals survived in physics, with no cancellations attributable to OA archiving? Is this temporary or a sign of sustainable compatibility? Will the advent of SCOAP3 change their business models before we have a chance to answer this question?
  • If the rise of OA archiving starts to harm TA journals, will the journals tend to change their archiving policies (retreating from green), convert to gold OA, fold up, something else? Can we estimate how many journals would take each of these options? Can we break down the estimates by field and country? Can we identify the key variables in their decisions? Can we do better than merely asking editors and publishers for predictions or hypothetical decisions?
  • What percentage of the annual average TA journal price increase (overall or in a given field) is due to for-profit publishers and what percentage is due to non-profit publishers?
    • Can we compute these percentages retroactively to see the trend over the past few years?
  • How can we distinguish cancellations caused by the rise of OA from cancellations due to other causes?
    • Assuming we can distinguish the two types, how can we count the cancellations caused by the rise of OA?
  • How much does good journal management software reduce the cost of facilitating peer review and running a journal?
  • What percentage of journals pay editors or referees? What are the percentages by discipline and nation? When journals pay editors or referees, what are the average salaries, stipends, or fees?
  • There is a very wide range of claims about the cost of publishing journals (per page or per article) and a very wide range of claims about the prices charged for journals (per page or per article). What are the claims (who said what, when, and with respect to what journal or kind of journal)? Can we explain the differences among them and produce estimates (for certain kinds of journal) that all stakeholders could accept?
  • How many journals automatically deposit all their articles (or, all their OA articles) in OA repositories?
    • Of those that do, how many use their own repositories and how many use separate and independent repositories?
    • Case studies would be useful - document particular journals that are automatically depositing all articles. Are they depositing all content, or just research articles? Why have they decided to do this? Ensuring preservation and/or visibility of their journal? One approach would be to work with a particular archive (e.g. E-LIS or PubMed Central) that has a journals list. A smaller study to help clarify the questions for a larger study could be very helpful.
  • What kinds of subsidies do TA journals get from public funds? Can we quantify these subsidies? How many countries pay these subsidies? How many publishers and journals benefit from them?
    • How large are the subsidies relative to publication costs? That is, how much do journals depend on these subsidies?
    • How large are the subsidies relative to the same country's support for green OA? For example, the NIH will spend about $2-4 million/year to implement its OA mandate, but spends about 10 times that amount ($30 million) in page charges and other subsidies for TA journals.
  • In the world of newspapers, OA allows the publisher to raise advertising rates and revenue (something realized in 2007 by the New York Times and Wall Street Journal). To what extent is the same true for scholarly journals? If there is an analogous increase for scholarly journals, how large is it? Does it vary by field?
  • How often do authors ask to retain more rights than a journal's standard contract allows? How often do journals accede to these author requests? Can we classify these attempts and successes by field, country, and terms requested by authors?
    • A small study could help to get this started. For example, interview researchers in one or a few departments at one university about their experiences with seeking more rights than what a standard contract allows. How many have sought additional rights? What methods did they use - one-on-one negotiations, author's addenda? What were the results? The recent requirement policy of the US NIH means that there are likely many US authors with recent experiences in this area. A qualitative study of this nature could provide a great set of questions to follow up with a survey approach. The ideal would be a survey representative of all researchers everywhere - a smaller sample would be very useful, though. If anyone tries this, please write up your research methodology and post a link under Research in Progress - this would greatly facilitate ad hoc collaborations. Someone else might use your survey questions! IJPE
    • How many journals that don't give blanket permission in advance for author-initiated self-archiving routinely give permission on request?
    • How many journals have accepted the terms of a given author addendum? Can we compare the track records of different author addenda, and break down the results by field and country?
  • Some journals report increased submissions after shortening their embargo period or converting to OA. How general is this phenomenon? Is it more likely to arise in some fields than in others? How large are the increases and how can a given TA journal predict the result in its own case?
  • In his article on flipping a journal from TA to OA (SOAN for October 2007), Peter Suber said: "It's easy for a journal to measure the extent of the match between its reading and writing institutions. Simply calculate the percentage of authors [published in the journal] who are affiliated with subscribing institutions. Even journals that are quite sure they would never flip their business model should do the calculation. The door may be open [to flipping the business model to OA], and that's a fact worth knowing."
    • Can this calculation be done for a given journal by an outside researcher without access to the journal's list of subscribers, for example, by reconstructing an approximate subscriber list through OPACs?
  • How many TA journals have OA backruns? Can we break this down by field and country?
    • How many TA journals without an OA backrun would agree to make their backrun OA if only someone (like Google or the OCA) would pay to digitize it?
    • For TA journals with OA backruns, what is the average embargo or "moving wall" after which older issues become OA? Can we break down the average durations by field and country?
  • When authors submit work to colleagues for review even before submitting to a journal, the result is likely to be a higher quality submission, with less work required for peer review coordination and copyediting. Can we support this with evidence? More on this topic here.
  • What factors lead publishers to offer or not offer OA options? K.R. Eschenfelder's 2009 report asked these sorts of questions of archives, libraries and museums; Peter Suber suggested a similar study focused on publishers.
  • If a hybrid OA journal publisher promises to reduce subscription prices (roughly) in proportion to author uptake of the OA option, how can outsiders (authors, readers, subscribers) verify that the publisher is doing so? How can the publisher satisfy doubts that it is doing so?
  • Many universities have funds to pay publication fees at fee-based OA journals. See the OAD list of such funds.
    • If universities or other institutions wanted to offer serious financial support to no-fee OA journals, how could they do so?

Book business models

  • For a dual edition book (with OA and non-OA editions), how can we measure the sales the non-OA edition would have had in the absence of the OA edition? If we think the OA edition increased (or decreased) net sales, how can we measure that increase (or decrease)?
    • The effect of an OA edition on sales will probably vary by type of book (e.g. monograph v. encyclopedia), but exactly how?
    • If the OA edition comes out after the non-OA edition, how does the delay affect the impact on sales? If the OA edition comes out simultaneously with the non-OA edition, but is discontinued after a time, how does that affect the impact on sales?
    • For evidence and anecdotes on this question, see the oa.books.sales tag library for the OATP.
  • How will improvements in ebook readers affect the economics of dual-edition books? If OA editions currently increase the sales of print editions, how much of that effect is due to the fact that few people currently want to read a whole book on a screen?
    • Is there any evidence that the OA sales bump is decreasing as the quality of ebook readers increases?
  • Does Google's opt-out Library Project increase the sales of scanned copyrighted books (as Google expects) or decrease their sales (as the publisher and author groups suing Google fear)?

Software

  • Compare the free, libre and open-source (FLOSS) packages for creating and maintaining OA repositories. Which features are present and absent in each? Which are best suited for different kinds of users? Perhaps include some non-FLOSS packages.
    • This job was done by Raym Crow in August 2004, and by several others since then for subsets of the available packages. But it needs to be done again for the full range and latest versions of FLOSS packages.

Researcher attitudes and practices

  • (Via Mark H. Wood) How are stakeholders using repository usage statistics? There are several dimensions to the question:
    • Type of repo: institutional, disciplinary, special-purpose
    • Type of user: organizational administrator, departmental administrator, sysadmin, application manager, programmer, contributor, curator, document user
    • Lots of folks have lots of ideas about what should be stored, counted, and presented, but do we know what different kinds of statistics users do with all these data products, what they want but can't get, and what they think they are learning?
  • Are researchers responding to funder and university OA policies by changing the patterns of where they submit their work for publication?
    • If so, what kinds of journals are receiving more submissions, attributable to OA policies, and what kinds are receiving fewer?
    • Is there any evidence that journals are anticipating these changes and altering their policies in order to benefit from increased submissions, or at least to avoid suffering from decreased submissions?
  • How many authors are already allowed to self-archive (by the journal in which they published) and have not self-archived? How many papers are already covered by these permissions but not yet self-archived? Can we break down the numbers by field, country, and year?
  • How many more researchers would routinely self-archive if they understood that it was lawful? If they understood that it took an average of 6-10 minutes/paper? If they knew that self-archiving increased citations 40-250% (on average, in different fields)?
    • How much of the failure to self-archive is due to ignorance and misunderstanding?
    • For researchers outside of academe, how much is due to organizational information policy?
  • Do junior faculty deposit their work in archives, or submit it to OA journals, more often than senior faculty, perhaps because they grew up with the internet and more readily see the benefits of OA? Or do senior faculty do so more often than junior faculty, perhaps because they already have tenure and can afford to disregard the criteria of conservative promotion and tenure committees?
    • How will self-archiving rates change as today's junior faculty become tenured? What about rates of submission to OA journals?
  • Let's say that 20% of researchers publish 80% of the peer-reviewed articles. (First, get the actual percentage of researchers who publish 80% of the articles. But here assume it's 20%.) Now study that 20%. What do those researchers know about OA journals and OA archiving? How often do they submit their work to OA journals, and how often do they deposit copies of their postprints in OA repositories? If these researchers are clustered in certain fields, countries, or institutions, where are the largest clusters?
  • When researchers learn about a TA article of interest to them, how often do they look online for an OA copy? When they do so, where do they look?
  • What are the "best practices" for universities to increase the rate of self-archiving? What do they cost? How much do they boost the rate of deposits?
    • If some practices work best in some niches or circumstances, then in which niches or circumstances?
    • One research project is simply to collect the strategies used at different institutions to populate their repositories. Another series of research projects is to study individual strategies to understand how well they work in different circumstances.
  • When faculty have a choice between equally prestigious and equally suitable OA and TA journals, will they submit their work to the OA journal? When the answer is no, what other variables come into play?
  • When a new journal is excellent but little-known, what steps will tend to earn it prestige in proportion to its quality?
  • To test the extent to which workflow issues delay self-archiving, interview or observe open access advocates to determine the lag time between philosophical commitment to self-archiving, and actual practice.
    • When articles about OA are published as TA, Peter Suber often notes this as "this is not OA - at least not so far". Do authors follow up? How long does this take?
  • "compare the author/reader overlap in scholarly output/process/input workflows with any other slice of life where two or more such overlapping interest stakeholders are involved in a jousting game - and then see if this gets us new aspects to look at" (cf. comment 18 of Fwd: Nature Communications: A breakthrough for open access? - a public discussion thread prompted by Cameron Neylon; this suggestion by Claudia Koltzenburg)
  • What do scholars want as readers? Libre open access means that scholars can download materials to their own desktops, create multiple copies as desired, be sure of ongoing access even if they leave their current institution (especially important to students), add their own highlighting and annotations as desired, and share their copies (with or without annotations) with others. Since libre OA is only a small percentage of OA so far, scholars have little experience with these potentials, so survey research would have limited usefulness. Research approaches that combine educating scholars about the potential with asking about its desirability would make more sense. Experimental approaches could also be useful (create libre and various DRM-ridden versions of an article, for example, and ask scholars to rate their usefulness). Heather Morrison is very interested in this research question; see this. See also the work of Rick Kopak of the Public Knowledge Project and others on the OJS reading tools.
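The 80/20 question above (what share of researchers produces 80% of the peer-reviewed articles?) reduces to a small cumulative-sum calculation once per-author article counts are in hand. A sketch, with hypothetical placeholder counts:

```python
# A sketch for the 80/20 question above: given per-author article counts,
# find the smallest share of authors accounting for 80% of the articles.
# The counts below are hypothetical placeholders, not real data.

def author_share_for_articles(counts, target=0.8):
    """Fraction of authors (most prolific first) needed to cover `target`
    of all articles."""
    ordered = sorted(counts, reverse=True)
    total = sum(ordered)
    covered = 0
    for i, count in enumerate(ordered, start=1):
        covered += count
        if covered >= target * total:
            return i / len(ordered)
    return 1.0

if __name__ == "__main__":
    counts = [40, 25, 15, 5, 5, 4, 3, 1, 1, 1]  # hypothetical per-author output
    share = author_share_for_articles(counts)
    print(f"{share:.0%} of authors produce 80% of the articles")
```

The measurement work is in building the per-author counts (from citation databases or repository records); this sketch only shows how the "actual percentage" the question asks for falls out of those counts.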

Universities

  • Arthur Sale has collected evidence that university-level OA mandates succeed in driving up the rate of repository deposits toward 100%.
    • As more universities adopt mandates, and as existing mandates have time to work, there are opportunities to build on Sale's data with data from other universities. What does the newer and larger picture reveal? If some mandates work better than others, what are the key variables?
    • If more than one kind of policy effectively drives the deposit rate toward 100%, then what are the variations within the family of effective policies? This will matter if a given institution would find it politically easier to adopt one kind of policy than another.
  • In a May 2009 blog post, Richard Poynder asked a series of good questions about university OA mandates:
    • Can we expect the surge of new mandates to achieve the same levels of compliance reported by [Arthur] Sale?
    • Is there any significance in the fact that many of the new mandates are being introduced by library faculties, and can we expect that to affect compliance rates?
    • Will the fact that many of the new mandates are self-imposed affect compliance rates? (Will it make them appear more voluntary than mandatory?)
    • Will the fact that many of the new mandates include opt-outs affect compliance rates? (Will that make them appear more voluntary than mandatory?)
    • What is full compliance so far as a self-archiving mandate is concerned? (Is Sale’s 70% level the objective, or should the research community be aiming higher?)
  • What lessons can be learned from institutions which have adopted OA mandates about how to formulate the policy, propose it, educate faculty and administrators about the issues, and build consensus for its adoption?
  • If universities require faculty to retain key rights when publishing journal articles, with no opt-outs, would that (1) help authors in their negotiations with journals, or (2) hurt authors by causing journals to reject their papers?
  • If the answer is sometimes the former and sometimes the latter, then what are the key variables (for example, the size and prestige of the university, the number of universities with similar policies, the prestige of the journal)? What kinds of policies are more likely to help authors?
  • When universities require faculty to retain key rights when publishing journal articles, but allow opt-outs (like the Harvard policy of February 2008), then how often do faculty request opt-outs? How often do journals demand that authors request opt-outs, and how often do authors refuse these demands? How many journals would actually reject a paper (as opposed to threatening to reject a paper) from an author who refused to seek an opt-out? Would this rate decline as more universities adopted similar policies?
  • How much time does it take for a university to create and maintain an OAI-compliant OA repository? How much does it cost the university, in hardware, software, and human resources? If it depends on how much the university wants to do with the repository, and how much to educate users, then can we break down the costs for each layer of use and service?
  • When universities recommend or encourage use of an author addendum, how many faculty actually try it? How do universities stand behind the authors whose addendum was rejected? (What practices are actually in use?) How effective are these university actions in getting journals to accept rather than reject an addendum?
  • How many universities mandate OA for electronic theses and dissertations? Can we break this down by university size and country?
  • Can we put together some "best practices" for policy exemptions and embargoes?
  • For a given university and its actual serials budget and research output: If we assume that all journals convert to OA and charge a given publication fee per paper (say, $500), then will the total budget needed to cover faculty publication fees be larger or smaller than the budget now devoted to TA journals? This calculation has been done before, at least three times, and in each case the authors assumed that all OA journals would charge publication fees and that all fees would be paid by universities. The purpose of redoing the calculation is to make the assumptions more realistic. First do the baseline calculation, above. Then: how does the number change when 10% (then 20%, then 30% etc.) of OA journals charge no publication fee at all? How does the number change when 10% (then 20%, then 30% etc.) of those fees are paid by funding agencies rather than universities?
  • For more detail on the previous calculations and recommended refinements, see Peter Suber's article from SOAN for June 2006.
  • What percentage of university libraries have negotiated full access privileges for walk-in patrons? Is the trajectory up or down?
  • How successful have libraries been in negotiating licensing terms that benefit authors (e.g. providing permission for self-archiving), not just terms that benefit readers? What tactics are likely to increase that success rate?
  • A handful of universities are trying to negotiate better terms for their authors (not just their readers), but most of them are not yet ready to discuss their experience in public. How many universities are already trying these negotiations? How would their prospects improve if more universities joined the effort?
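
The fee-budget calculation described above (baseline plus more realistic refinements) can be sketched as a small model. All inputs below — the paper count, the serials budget, and the percentage scenarios other than the $500 fee mentioned in the question — are invented placeholders, not data from any actual university.

```python
# Illustrative model (hypothetical numbers) for comparing a university's
# current toll-access (TA) serials budget with projected OA publication fees.

def projected_fee_budget(papers_per_year,
                         fee_per_paper=500.0,
                         share_no_fee_journals=0.0,
                         share_paid_by_funders=0.0):
    """Estimate what the university itself would pay in OA publication fees.

    share_no_fee_journals: fraction of OA journals charging no fee at all.
    share_paid_by_funders: fraction of the remaining fees paid by funding
    agencies rather than by the university.
    """
    fee_charging_papers = papers_per_year * (1.0 - share_no_fee_journals)
    university_share = fee_charging_papers * (1.0 - share_paid_by_funders)
    return university_share * fee_per_paper

# Baseline assumption: every paper incurs a $500 fee, all paid by the university.
baseline = projected_fee_budget(5000)  # 5,000 papers/year is a made-up figure

# Refinement: 30% of OA journals charge no fee; funders pay 20% of the rest.
refined = projected_fee_budget(5000,
                               share_no_fee_journals=0.30,
                               share_paid_by_funders=0.20)

current_serials_budget = 3_000_000.0   # hypothetical TA spending
print(baseline, refined, refined < current_serials_budget)
```

Rerunning the model while sweeping the two percentage parameters from 10% upward reproduces the sensitivity analysis the question calls for.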

Libraries

  • Library support for open access (can be broken down by nature of support, type/size of library, and region/location). Types of support:
    • Journal hosting (see Karla Hahn's recent ARL study on library publishing activities) [date? link?]
    • Payment of article processing fees
    • Economic support (and/or commitments) for open access initiatives, e.g. SCOAP3, Stanford Encyclopedia of Philosophy
    • Institutional and disciplinary repositories (note: the Canadian Association of Research Libraries does an annual survey). Number of repositories, type, contents, staffing by type and FTE.
    • Educational: number and type of workshops. See the recent SPEC kit on scholarly communications activities.
  • Library use of open access resources (can be broken down by nature of use, type/size of library, and region/location).
    • Inclusion of the DOAJ and other open access lists in link-resolving services. (Statistics on hits/downloads?)
    • Local open access collection development.
    • Interlibrary loan searching for open access resources.
  • Library support for open access as measured by library website design.
    • Is there a link to information about open access and scholarly communication? If so, where: on the library home page, placed prominently or not-so-prominently, several clicks in, etc.?
    • Does this differ by library type, size, region, or country?
  • Is there a correlation between library OA support and OA success at the organization?
  • How accurately can a university measure the "ullage" of its library?
    • In 2004, Peter Suber introduced the term "ullage" (after the word for the empty space at the top of a wine bottle) for the gap between what a university directly offers through its library and the totality of literature to which faculty and students might need access.
    • Can a university measure its ullage in absolute numbers (of journals or journal articles) and percentages (of the journal literature in certain fields)? Can we agree on the total number of journal articles (for a given year and perhaps for a given field) so that different universities could use the same denominators when making their calculations and comparing their results?
    • Perhaps it would be most beneficial to first illustrate ullage at the world's largest research libraries.
    • Can researchers outside a university compute its ullage using public information? Can we do this for a wide range of institutions and map where ullage is higher and where it is lower?
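
Once a shared denominator is agreed, the ullage measure itself reduces to a simple ratio. A minimal sketch, with entirely hypothetical article counts:

```python
# Illustration of Suber's "ullage" measure: the share of the relevant
# article literature that a university's library does NOT directly provide.

def ullage(accessible_articles, total_relevant_articles):
    """Fraction of the relevant literature not directly accessible.

    Using an agreed denominator (total articles for a given year and
    perhaps field) lets different universities compare results.
    """
    if total_relevant_articles <= 0:
        raise ValueError("denominator must be positive")
    return 1.0 - (accessible_articles / total_relevant_articles)

# Made-up figures: licenses cover 1.2M articles of an agreed 2.0M-article
# denominator for the year.
print(f"ullage: {ullage(1_200_000, 2_000_000):.0%}")  # prints "ullage: 40%"
```

The hard research problems are in the inputs (agreeing on the denominator, counting what a library actually provides), not in the arithmetic.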

Funding agencies

  • What is the amount of the average research grant? How does that compare with the average cost of publishing a peer-reviewed journal article? (This is to compare the value added by funders with the value added by publishers.)
  • We know roughly how much it costs the NIH to implement its OA policy ($2-4 million/year). How much would it cost to expand the policy to cover all federally funded research? Can we do the same calculation for other countries?
  • If NIH agreed to pay processing fees charged by OA journals for all the journal articles produced by its research grants, assuming (say) $500 per paper, then how much would that be? How much would it be if it imposed a cap of 3 (then 5, 7, 9 etc.) on the number of fees paid per research project? For a given cap (say, 3 fees per research project), what percentage of the average grant budget would be devoted to fees rather than new research?
  • Of funders with OA policies, what percentage allow grantees to use grant funds to pay publication fees at fee-based OA journals? What percentage allow grantees to apply for supplemental funding to pay such fees?
  • What percentage of publication fees (at fee-based OA journals) is paid by funding agencies? Can we put this in a larger picture showing the percentage paid by universities, the percentage paid by researchers out of pocket, and the percentage waived by the journal?
  • How much grant funding under indirect costs of research currently goes to pay for library subscriptions? Would researchers agree to redirect some of this funding to pay OA article processing fees, and/or contribute to library OA publishing programs?
  • What percentage of non-classified research in a given field is funded by government funding agencies? by private funding agencies? by universities? by other sources? is not funded at all?
    • Start with a given country and go through all the fields of the natural sciences, social sciences, and humanities. Then do the same for some other countries.
    • What dollar amounts are associated with these percentage figures?
  • Take any set of important papers, for example, the top 10% by citation impact in 10 different fields. What percentage of them are based on publicly-funded research? That is, what percentage of them would be OA if the relevant government had adopted an OA mandate for publicly-funded research? What percentage are OA today?
  • Do this exercise for many different sets of important papers. For example, all the papers by a given year's Nobel prize winners; the consensus set of the 100 papers most important for understanding HIV/AIDS or climate change; etc.
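
The fee-cap question above (how much of an average grant would capped publication fees consume?) is a short piece of arithmetic. The $500 fee comes from the question itself; the grant size and per-project paper count below are invented for illustration.

```python
# Toy model for the fee-cap question: with a per-project cap on the number
# of OA publication fees a funder will pay, what share of the average grant
# goes to fees rather than new research? Only the $500 fee is taken from
# the text; the other numbers are hypothetical.

FEE_PER_PAPER = 500.0

def fee_share_of_grant(avg_grant, papers_per_project, fee_cap):
    """Fraction of the average grant consumed by capped publication fees."""
    fees_paid = min(papers_per_project, fee_cap) * FEE_PER_PAPER
    return fees_paid / avg_grant

avg_grant = 400_000.0  # hypothetical average grant size
for cap in (3, 5, 7, 9):
    share = fee_share_of_grant(avg_grant, papers_per_project=6, fee_cap=cap)
    print(f"cap={cap}: {share:.2%} of the grant goes to fees")
```

Even under generous assumptions the fee share stays small relative to the grant, which is the comparison the question asks researchers to quantify with real data.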

Open versus private

  • There are many theoretical and empirical questions about what is public/open and what is private. A few thoughts are posted here.