This is a timely paper with many thought-provoking ideas. I do see some potential for improvement, though, which I list below, referring to the line numbers. I hope this is helpful. Disclosure: I am a close colleague of one of the authors (Kramer).
50-51: By setting expectations for publication records, they did indirectly influence where researchers published.
58-62: One part of your description of OA seems to include bronze and hybrid, while another seems to exclude them.
101: Here you say that APC money is a major driver, but in 88 you say that only three of the European countries have a policy that favours gold. That seems a bit of a contradiction.
173-177: Perhaps also mention a general unease about taking unknown routes, and researchers fearing their behaviour may be frowned upon by colleagues/peers?
178-179: Also mention the Science-Metrix studies used by the EU?
300-305: To be able to assess these figures, it would be useful to compare them with the overall output of Wellcome-funded research.
312: The section on author gender is somewhat unexpected, as it was not mentioned in the abstract and does not follow from the ideas set out in the introduction (gender is not mentioned there as part of the OA discussion). There is also no mention that it was something Wellcome wanted to address. Though it is interesting, it perhaps needs better integration in the paper as a whole (or should be left out?). And again, it would be useful to have comparisons with the gender balance in the overall output of Wellcome-funded research.
329: Indexing by whom? The platform itself? Google Scholar?
338-342: Is the point in time when review starts unknown? I would expect that to be the day when the reviewer accepts to review?
363: Suggest using "review reports" instead of "review articles"; the latter may be confusing.
368: Doesn't that depend on what was expected? What are the criteria?
370 What is the investment made by Wellcome?
429-434: I do not agree that setting stringent criteria per se limits the range of potential providers. What would stop e.g. Cell Press or BMC from providing this?
493-494: Whether this is difficult to achieve surely depends on how easy it is for researchers to choose other venues? E.g. if the EU mandates OA but only funds APCs up to, say, 500€, the platform may become popular. The situation may also differ because of the much broader set of disciplines involved.
502-505: Are these all explicit goals of the funders mentioned?
548-550: Are you suggesting that funder platforms could have a reputation problem because people may think vetting/peer review is less stringent simply because the platform is intended for the funder's own authors? If so, perhaps make that more clear/explicit, because that would be a very crucial point. Is there any evidence for this? Would it feel like self-published material to the reader?
558: "in addition to publication elsewhere" could mean two things: publishing some outputs on the funder's own platform and some elsewhere, but also, given CC-BY licenses, additionally aggregating publications on the funder's own platform, duplicating those originally published elsewhere. In terms of branding etc. that may be a very interesting option.
589: As CoI (conflict of interest) is such an important issue, it would be welcome to treat this in some more detail. For instance: would there be a difference between funders, companies and research institutes in such a CoI? Would there be a difference between organizations with a narrow scope (Pfizer) and a broad scope (EU)? Is there a difference in what CoIs could cause, for instance lower standards or selectivity, and would that compensate for or strengthen the selectivity effects of the choice by that funder's authors to publish on the funder's own platform or elsewhere?
620: Perhaps mention the option of requiring that a platform be fully open source (such as OJS, Wikimedia or OSF). This is mentioned later (740), but could be given attention here as well.
650-651: That (transparency, high quality) may be only part of the answer, as trust and prestige are not built on facts alone. No matter how rigorous your process, if the platform is new and doesn't have famous names (whether organizations, authors or editors) attached to it, that may be problematic. Hence the difference between e.g. bioRxiv and eLife on the one side and preprints.org and e.g. RIO (despite having a different focus than traditional journals) on the other. Also consider how Nature Scientific Reports overtook PLoS One. And even if you have famous names attached, building a reputation takes time (how many people have heard of e.g. ReScience?).
675: What is the reason to look only at the principles of open scholarly infrastructures and not also at e.g. the TOP guidelines, the scholarly commons principles, the criteria from "Opening academic publishing - Development and application of systematic evaluation criteria" or the OCSDnet open science manifesto?
746: I do like the pledge to think bigger. However, instead of linking up with the other initiatives mentioned (which all play an important role in transitioning towards open science), funders could also consider more radical approaches to sharing the information that is used and generated to solve the scientific and societal issues they are concerned with as a funder: moving away from the articles/papers paradigm, putting data first, integrating the information used and generated (as well as the review/assessment thereof) per funded project, optimising output for machine readability and mining, etc.
Finally, and more generally, I would welcome more information on the (stated) reasons for these organisations to engage in publishing. What problem are they trying to solve, and how does it fit their mission? Are these reasons the same everywhere? How were they supported by evidence or a vision?