The mloss.org community bloghttp://mloss.org/communitySome thoughts about machine learning open source softwareenMon, 30 Mar 2015 15:24:13 -0000MLOSS workshop at ICML 2015: Open Ecosystemshttp://mloss.org/community/blog/2015/mar/30/mloss-workshop-at-icml-2015-open-ecosystems/<p>MLOSS workshops are returning to ICML this summer!</p> <p>Key dates:</p> <ul> <li>Submission deadline: 28 April 2015</li> <li>Workshop date: 10 July 2015</li> </ul> <p>We (Gaël Varoquaux, Cheng Soon Ong and Antti Honkela) are organising another MLOSS workshop at ICML 2015 in Lille, France this July. The theme for this edition is "Open Ecosystems", by which we wish to provoke discussion on the benefits (or drawbacks?) of having multiple tools in the same ecosystem. Our invited speakers (John Myles White and Matthew Rocklin) will share some of their experiences with Julia and Python, and we would be happy to hear from others, on the same or different ecosystems, through contributed talks. The usual demonstrations of great new software are naturally also welcome!</p> <p>In addition to the talks, we have planned two more active sessions:</p> <ul> <li>an open discussion with themes voted on by workshop participants, similar to <a href="http://mloss.org/workshop/nips13/">MLOSS 2013</a>; and</li> <li>a hackathon for planning and starting to develop infrastructure for measuring software impact.</li> </ul> <p>If you have any comments or suggestions regarding these, please add a comment here or email the organisers!</p> <p>More details at the workshop website: <a href="http://mloss.org/workshop/icml15/">http://mloss.org/workshop/icml15/</a></p>Antti HonkelaMon, 30 Mar 2015 15:24:13 -0000http://mloss.org/community/blog/2015/mar/30/mloss-workshop-at-icml-2015-open-ecosystems/A third of the top 100 papers are about softwarehttp://mloss.org/community/blog/2014/oct/30/a-third-of-the-top-100-papers-are-about-software/<p>How many of the papers in the top 100 most cited are about software?</p> <p>21, with an additional 12 papers which are not specifically about software itself, but about methods or statistics that were later implemented in software. When you take a step back and think about the myriad areas of research and the stratospheric numbers of citations the top 100 get, it is quite remarkable that one fifth of the papers are actually about software. I mean really about software, not software as an afterthought. Some examples:</p> <ul> <li><a href="http://dx.doi.org/10.1093/nar/25.24.4876">The CLUSTAL_X Windows interface: Flexible strategies for multiple sequence alignment aided by quality analysis tools.</a></li> <li><a href="http://dx.doi.org/10.1107/S0021889892009944">PROCHECK: a program to check the stereochemical quality of protein structures.</a></li> <li><a href="http://dx.doi.org/10.1093/bioinformatics/btg180">MrBayes 3: Bayesian phylogenetic inference under mixed models</a></li> </ul> <p>To put in perspective how rarefied the air is in the top 100, if we combined all citations received by all JMLR papers in the last five years (according to <a href="http://www.scimagojr.com/journalsearch.php?q=20969&amp;tip=sid">SCImago</a>) into one gigantic paper, it would still not make it into the top 100.</p> <p>Yes, yes, citations do not directly measure the quality of a paper, and there are community-size effects and all that.
To be frank, being highly cited seems to be mostly luck.</p> <p>In the spirit of open science, here is a <a href="https://plot.ly/8/~cong">bar plot</a> showing these numbers, and here is <a href="https://plot.ly/~cong/7">my annotated table</a>, which I updated from <a href="http://www.nature.com/polopoly_fs/7.21247!/file/WebofSciencetop100.xlsx">the original table</a>. For a more mainstream view of the data, look at the <a href="http://www.nature.com/news/the-top-100-papers-1.16224">Nature article</a>.</p>Cheng Soon OngThu, 30 Oct 2014 11:07:44 -0000http://mloss.org/community/blog/2014/oct/30/a-third-of-the-top-100-papers-are-about-software/Open Machine Learning Workshophttp://mloss.org/community/blog/2014/jul/28/open-machine-learning-workshop/<p>Just in case there are people who follow this blog but not <a href="http://hunch.net/?p=2787">John Langford's</a>, there is going to be an <a href="http://hunch.net/~nyoml/">open machine learning workshop</a> on 22 August 2014 at <a href="http://research.microsoft.com/en-us/labs/newyork/default.aspx">MSR, New York</a>, organised by <a href="http://research.microsoft.com/en-us/um/people/alekha/">Alekh Agarwal</a>, <a href="http://hunch.net/~beygel/">Alina Beygelzimer</a>, and <a href="http://hunch.net/~jl">John Langford</a>.</p> <p>As it says on John's blog: If you are interested, please email msrnycrsvp at microsoft.com and say “I want to come” so we can get a count of attendees for refreshments.</p>Cheng Soon OngMon, 28 Jul 2014 12:39:44 -0000http://mloss.org/community/blog/2014/jul/28/open-machine-learning-workshop/Machine Learning Distrohttp://mloss.org/community/blog/2014/jul/22/machine-learning-distro/<p>What would you include in a linux distribution to customise it for machine learning researchers and developers? Which tools would cover the needs of 90% of students who aim to do a PhD related to machine learning? How would you customise a mainstream linux distribution to include, by default, packages that would allow the user to quickly do machine learning on their laptop?</p> <p>There are several communities which have their own custom distributions:</p> <ul> <li><a href="https://www.scientificlinux.org">Scientific Linux</a>, which is based on <a href="http://www.redhat.com/products/enterprise-linux/">Red Hat Enterprise Linux</a>, is focused on making things easy for system administrators of larger organisations. The two big users are <a href="http://fermilinux.fnal.gov/">FermiLab</a> and <a href="http://linux.web.cern.ch/linux/">CERN</a>, who each have their own custom "spin". Because of its experimental physics roots, it does not have a large collection of pre-installed scientific software, but it makes it easy for users to install their own.</li> <li><a href="http://nebc.nerc.ac.uk/tools/bio-linux/bio-linux-7-info">Bio-Linux</a> is at the other end of the spectrum. Based on <a href="http://www.ubuntu.com/">Ubuntu</a>, it aims to provide an easy-to-use bioinformatics workstation by including more than 500 bioinformatics programs, along with graphical menus for them and sample data for testing them. It is targeted at the end user, with simple instructions for running it live from DVD or USB, installing it, or dual booting it.</li> <li><a href="https://spins.fedoraproject.org/scientific-kde/">Fedora Scientific</a> is the latest entrant, providing a nice list of numerical tools, visualisation packages and also LaTeX packages.
Its <a href="http://fedora-scientific.readthedocs.org/en/latest/">documentation</a> lists packages for C, C++, Octave, Python, R and Java. Version control is also not forgotten. A recent <a href="http://opensource.com/life/14/6/linux-distribution-science-geeks">summary of Fedora Scientific</a> was written as part of Open Source Week.</li> </ul> <p>It would seem that Fedora Scientific would satisfy the majority of machine learning researchers, since it already provides packages for most things. Some additional tools that may be useful include:</p> <ul> <li>tools for managing experiments and collecting results, to make our papers <a href="http://ivory.idyll.org/blog/2014-our-paper-process.html">replicable</a></li> <li>GPU packages for CUDA and OpenCL</li> <li>something for managing papers for reading, similar to Mendeley</li> <li>something for keeping track of ideas and to-do lists, similar to Evernote</li> </ul> <p>There's definitely tons of stuff that I've forgotten!</p> <p>Perhaps a good way to start is to collect the names of packages useful to the machine learning researcher in some popular package managers such as yum, apt-get and dpkg. Please post your favourite packages in the comments.</p>Cheng Soon OngTue, 22 Jul 2014 00:00:25 -0000http://mloss.org/community/blog/2014/jul/22/machine-learning-distro/Google Summer of Code 2014http://mloss.org/community/blog/2014/jun/03/google-summer-of-code-2014/<p><a href="https://www.google-melange.com/gsoc/homepage/google/gsoc2014">GSoC 2014</a> runs from 19 May to 18 August this year. The students should now be just sinking their teeth into the code, and hopefully having a lot of fun while gaining invaluable experience. This amazing program is in its 10th year now, and it is worth repeating how it benefits everyone:</p> <ul> <li> <p><em>students</em> - You learn how to write code in a team, and work on projects that are long term. Suddenly, all the software engineering lectures make sense! Having GSoC in your CV really differentiates you from all the other job candidates out there. Best of all, you actually have something to show your future employer that cannot be made up.</p> </li> <li> <p><em>mentors</em> - You get help for your favourite feature in a project that you care about. For many, it is a good introduction to project management and supervision.</p> </li> <li> <p><em>organisations</em> - You recruit new users and, if you are lucky, new core contributors. GSoC experience also tends to push projects to be more beginner friendly, and to make it easier for new developers to get involved.</p> </li> </ul> <p>I was curious about how many machine learning projects were in GSoC this year and wrote a small <a href="http://nbviewer.ipython.org/urls/gist.githubusercontent.com/chengsoonong/dede21b2eefa43b30d14/raw/ca651ecc818cca66424b48c40e7acda5267dff79/gsoc2014-machine-learning.ipynb">ipython notebook</a> to try to find out.</p> <p>Looking at the organisations with the most students, I noticed that the Technical University of Vienna has come together and joined as a mentoring organisation. This is an interesting development, as it allows different smaller projects (the titles seem disparate) to come together and benefit from a more sustainable open source project.</p> <p>On to machine learning... Using a bunch of heuristics, I tried to identify machine learning projects from the organisation name and project titles. I found more than 20 projects with variations of "learn" in them.
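Something like the following sketch captures the flavour of such a filter (the example titles and the keyword list are made up for illustration; this is not the actual notebook code):</p> <pre><code># Sketch of a keyword filter over GSoC project titles.
# The titles below are invented examples, not the real GSoC 2014 data.
titles = [
    "scikit-learn: improved Gaussian mixture models",
    "Shogun: deep learning meets structured output",
    "KDE: better colour management",
]

keywords = ["learn", "neural", "classif", "regression", "bayes"]

ml_projects = [title for title in titles
               if any(keyword in title.lower() for keyword in keywords)]
print(ml_projects)
</code></pre> <p>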
This heuristic obviously misses out projects from R, some of which are clearly machine learning related, but I could not find a rule to capture them. I am pretty sure I am missing others too. I played around with some topic modelling, but this is hampered by the fact that I could not figure out a way to scrape the project descriptions from the dynamically generated list of project titles on the GSoC page.</p> <p>Please update the <a href="https://gist.github.com/chengsoonong/dede21b2eefa43b30d14">source</a> with your suggestions!</p>Cheng Soon OngTue, 03 Jun 2014 00:01:00 -0000http://mloss.org/community/blog/2014/jun/03/google-summer-of-code-2014/Reproducibility is not simplehttp://mloss.org/community/blog/2014/mar/30/reproducibility-is-not-simple/<p>There has been a flurry of articles recently outlining 10 simple rules for X, where X has something to do with data science, computational research and reproducibility. Some examples are:</p> <ul> <li><a href="https://www.authorea.com/users/3/articles/3410/_show_article">10 Simple Rules for the Care and Feeding of Scientific Data</a> (kudos for using an open collaborative writing tool!)</li> <li><a href="http://www.ploscollections.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003285">Ten Simple Rules for Reproducible Computational Research</a></li> <li><a href="http://www.ploscollections.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1003506">Ten Simple Rules for Effective Computational Research</a></li> </ul> <h2>Best practices</h2> <p>These articles provide a great resource for getting started on the long road to doing "proper science". Some common suggestions which are relevant to practical machine learning include:</p> <h5>Use version control</h5> <p>Start now. No, not after your next paper, do it right away! Learn one of the modern distributed version control systems, <a href="http://git-scm.com/">git</a> or <a href="http://mercurial.selenic.com/">mercurial</a> currently being the most popular, and get an account on <a href="https://github.com/">github</a> or <a href="https://bitbucket.org">bitbucket</a> to start sharing. Even if you don't share your code, it is a convenient offsite backup. Github is the most popular for open source projects, but bitbucket has the advantage of free private accounts. If you have an email address from an educational institution, you get the premium features for free too.</p> <p>Distributed version control systems can be conceptually daunting, but it is well worth the trouble to understand the concepts instead of just robotically typing in commands. There are numerous tutorials out there, and here are two which I personally found entertaining: <a href="http://matthew-brett.github.io/pydagogue/foundation.html">git foundations</a> and <a href="http://hginit.com/">hginit</a>. For those who don't like the command line, have a look at GUIs such as <a href="http://www.sourcetreeapp.com/">sourcetree</a>, <a href="https://code.google.com/p/tortoisegit/">tortoisegit</a>, <a href="http://tortoisehg.bitbucket.org/">tortoisehg</a>, and <a href="http://git-scm.com/docs/gitk.html">gitk</a>. If you work with other people, it is worth learning the <a href="http://nathanhoad.net/git-workflow-forks-remotes-and-pull-requests">fork and pull request</a> model, and using the <a href="http://nvie.com/posts/a-successful-git-branching-model/">gitflow</a> convention.</p> <p>Please add your favourite tips and tricks in the comments below!</p> <h5>Open source your code and scripts</h5> <p>Publish everything.
Even the two lines of Matlab that you used to plot your results. The readers of your NIPS and ICML papers are technical people, and it is often much simpler for them to look at your Matlab plot command than to parse the paragraph that describes the x and y axes, the meaning of the colours and line types, and the specifics of the displayed error bars. Tools such as <a href="http://ipython.org/notebook.html">ipython notebooks</a> and <a href="http://yihui.name/knitr/">knitr</a> are examples of easy-to-implement literate programming frameworks that allow you to make your supplement a live document.</p> <p>It is often useful to try to conceptually split your computational code into "programs" and "scripts". There is no hard and fast rule for where to draw the line, but one useful way to think about it is to contrast code that can be reused (something to be installed) with code that runs an experiment (something that describes your protocol). An example of the former is your fancy new low memory logistic regression training and testing code. An example of the latter is code to generate your plots. Make both types of code open, and document and test them well.</p> <h5>Make your data a resource</h5> <p>Your result is also data. When open data is mentioned, most people immediately conjure images of the inputs to prediction machines. But intermediate stages of your workflow are often left out of making things available. For example, if in addition to providing the two lines of code for plotting, you also provided your multidimensional array containing your results, your paper now becomes a resource for future benchmarking efforts. If you made your precomputed kernel matrices available, other people can easily try out new kernel methods without having to go through the effort of computing the kernel.</p> <p>Efforts such as <a href="http://mldata.org">mldata.org</a> and <a href="http://mlcomp.org">mlcomp.org</a> provide useful resources to host machine learning oriented datasets. If you do create a dataset, it is useful to get <a href="https://www.datacite.org/">an identifier</a> for it so that people can give you credit.</p>
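<p>As a toy illustration of these last two points, here is a minimal Python sketch (the file names, numbers and method names are all made up) that ships the raw results array together with the handful of lines that produce the figure:</p> <pre><code># Sketch only: publish the results array alongside the plotting script.
# results[i, j] = test error of method i on dataset j (hypothetical numbers).
import numpy as np
import matplotlib.pyplot as plt

results = np.array([[0.12, 0.08, 0.21],
                    [0.10, 0.09, 0.18]])
np.savetxt("results.csv", results, delimiter=",")  # the data behind the figure
# A precomputed kernel matrix could be shared the same way, e.g. np.save("kernel.npy", K).

# The few lines that actually generate the figure in the paper.
plt.plot(results.T, marker="o")
plt.legend(["method A", "method B"])
plt.xlabel("dataset")
plt.ylabel("test error")
plt.savefig("figure1.pdf")
</code></pre>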
<h2>Challenges to open science</h2> <p>While the articles call these rules "simple", they are by no means easy to implement. Though easy to state, there are many practical hurdles to making every step of your research reproducible.</p> <h5>Social coding</h5> <p>Unlike publishing a paper, where you do all your work before publication, publishing a piece of software often means that you have to support it in future. It is <a href="http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0024914">remarkably difficult</a> to keep software available in the long term, since most junior researchers move around a lot and often leave academia altogether. It is also challenging to find contributors who can help out in stressful periods, and to keep software up to date and useful. Open source software suffers from the tragedy of the commons, and it quickly becomes difficult to maintain.</p> <p>While it is generally good for science that everything is open and mistakes are found and corrected, the current incentive structure in academia does not reward support for ongoing projects. Funding is focused on novel ideas, publications are used as metrics for promotion and tenure, and software gets left out.</p> <h5>The secret branch</h5> <p>When developing a new idea, it is often tempting to do so without making it open to public scrutiny. This is similar to the idea of a development branch, but you may wish to keep it secret until publication. The same argument applies for data and results, where there may be a moratorium. I am currently unaware of any tools that allow easy conversion between public and private branches. Github allows forks of repositories, which you may be able to make private.</p> <p>Once a researcher gets fully involved in an application area, it is inevitable that they start working on the latest data generated by their collaborators. This could be the real-time stream from Twitter or the latest double-blind drug study. Such datasets are often embargoed from being made publicly available due to concerns about privacy. In the area of biomedical research there are efforts to allow bona fide researchers access to data, such as <a href="http://www.ncbi.nlm.nih.gov/gap">dbGaP</a>, which seamlessly provides a resource for public and private data. Instead of being a hurdle, a convenient mechanism to facilitate the transition from private to open science would encourage many new participants.</p> <p>What is the right access control model for open science?</p> <h5>Data is valuable</h5> <p>It is a natural human tendency to protect a scarce resource that gives one a competitive advantage. For researchers, these resources include source code and data. While it is understandable that authors of software or architects of datasets would like to be the first to benefit from their investment, it often happens that these resources are not made publicly available even after publication.</p>Cheng Soon OngSun, 30 Mar 2014 11:16:11 -0000http://mloss.org/community/blog/2014/mar/30/reproducibility-is-not-simple/Keynotes at ACML 2013http://mloss.org/community/blog/2013/nov/14/keynotes-at-acml-2013/<p>We were very lucky this year to have an amazing set of keynote speakers at ACML 2013 who have made key contributions to getting machine learning into the real world. Here are some links to the open source software projects that they mentioned during their talks. The videos of the talks should be available at some point on the <a href="http://acml2013.conference.nicta.com.au/">ACML website</a>.</p> <p>We started off with Geoff Holmes, who spoke at <a href="http://mloss.org/workshop/nips06/">MLOSS 06</a>. He told us about how <a href="http://www.cs.waikato.ac.nz/ml/weka/">WEKA</a> has been used in industry (satisfying Kiri Wagstaff's Challenge #2), and about the new project for streaming data, <a href="http://moa.cms.waikato.ac.nz/">MOA</a>. Later in the day, Chih-Jen Lin told us how important it was to understand both machine learning and optimisation, so that you can exploit the special structure for fast training of SVMs. This is how he obtained amazing speedups in <a href="http://www.csie.ntu.edu.tw/~cjlin/liblinear/">LIBLINEAR</a>. On the second day, Ralf Herbrich (who also gave a tutorial) gave us a behind-the-scenes tour of TrueSkill, the player matching algorithm used on Xbox Live.
Source code in F# is available <a href="http://blogs.technet.com/b/apg/archive/2008/06/16/trueskill-in-f.aspx">here</a>, and the version generalised to track skill over time is available <a href="http://blogs.msdn.com/b/dsyme/archive/2012/04/19/updated-version-of-quot-trueskill-through-time-quot-bayesian-inference-code.aspx">here</a>.</p> <p>Thanks to Geoff, Chih-Jen and Ralf for sharing their enthusiasm!</p>Cheng Soon OngThu, 14 Nov 2013 16:18:40 -0000http://mloss.org/community/blog/2013/nov/14/keynotes-at-acml-2013/What does the “OSS” in MLOSS mean?http://mloss.org/community/blog/2013/sep/01/what-does-the-oss-in-mloss-mean/<p>I was recently asked to become an Action Editor for the <a href="http://jmlr.org/mloss/">Machine Learning and Open Source Software (MLOSS)</a> track of the Journal of Machine Learning Research. Of course, I gladly accepted, since the <a href="http://jmlr.org/mloss/mloss-info.html">aim</a> of the JMLR MLOSS track (as well as the <a href="http://mloss.org/about/">broader MLOSS project</a>) -- to encourage the creation and use of open source software within machine learning -- is well aligned with my own interests and attitude towards scientific software.</p> <p>Shortly after I joined, one of the other editors raised a question about how we are to interpret an item in the <a href="http://jmlr.org/mloss/mloss-info.html">review criteria</a> that states that reviewers should consider the "freedom of the code (lack of dependence on proprietary software)" when assessing submissions. What followed was an engaging email discussion amongst the Action Editors about how to clarify our position.</p> <p>After some discussion (summarised below), we settled on the following guideline, which tries to ensure MLOSS projects are as open as possible while recognising the fact that MATLAB, although "closed", is nonetheless widely used within the machine learning community and has an open "work-alike" in the form of <a href="http://www.gnu.org/software/octave/">GNU Octave</a>:</p> <blockquote> <p><strong>Dependency on Closed Source Software</strong></p> <p>We strongly encourage submissions that do not depend on closed source and proprietary software. Exceptions can be made for software that is widely used in a relevant part of the machine learning community and accessible to most active researchers; this should be clearly justified in the submission.</p> <p>The most common case here is the question whether we will accept software written for Matlab. Given its wide use in the community, there is no strict reject policy for MATLAB submissions, but we strongly encourage submissions to strive for compatibility with Octave unless absolutely impossible.</p> </blockquote> <h2>The Discussion</h2> <p>There were a number of interesting arguments raised during the discussion, so I offered to write them up in this post for posterity and to solicit feedback from the machine learning community at large.</p> <h3>Reviewing and decision making</h3> <p>A couple of arguments were put forward in favour of a strict "no proprietary dependencies" policy.</p> <p>Firstly, allowing proprietary dependencies may limit our ability to find reviewers for submissions -- an already difficult job.
Secondly, stricter policies have the benefit of being unambiguous, which would avoid future discussions about the acceptability of submissions.</p> <h3>Promoting open ports</h3> <p>An argument made in favour of accepting projects with proprietary dependencies was that doing so may actually increase the chances of their code being forked to produce a version with no such dependencies.</p> <p><a href="http://mikiobraun.de">Mikio Braun</a> explored this idea further, along with some broader concerns, in a <a href="http://blog.mikiobraun.de/2013/08/curation-collaboration-science.html">blog post</a> about the role of curation and how it potentially limits collaboration.</p> <h3>Where do we draw the line?</h3> <p>Some of us had concerns about what exactly constitutes a proprietary dependency and came up with a number of examples that possibly fall into a grey area.</p> <p>For example, how do operating systems fit into the picture? What if the software in question only compiles on Windows or OS X? These are both widely used but proprietary. Should we ensure MLOSS projects also work on Linux?</p> <p>Taking a step up the development chain, what if the code base is most easily built using proprietary development tools such as Visual Studio or XCode? What if libraries such as MATLAB's Statistics Toolbox or Intel's MKL library are needed for performance reasons?</p> <p>Things get even more subtle when we note that certain data formats (e.g., for medical imaging) are proprietary. Should such software be excluded even though the algorithms might work on other data?</p> <p>These sorts of considerations suggested that a very strict policy may be difficult to enforce in practice.</p> <h3>What is our focus?</h3> <p>It is pretty clear what position Richard Stallman or other fierce free software advocates would take on the above questions: reject all of them! It is not clear that such an extreme position would necessarily suit the goals of the MLOSS track of JMLR.</p> <p>Put another way, is the focus of MLOSS the "ML" or the "OSS"? The consensus seemed to be that we want to promote open source software to benefit machine learning, not the other way around.</p> <h2>Looking At The Data</h2> <p>Towards the end of the discussion, I made the argument that if we cannot be coherent we should at least be consistent, and presented some data on all the <a href="http://jmlr.org/mloss/">accepted MLOSS submissions</a>. The list below shows the breakdown of languages used by the 50 projects that have been accepted to the JMLR track to date. I'll note that some projects use and/or target multiple languages and that, because I only spent half an hour surveying the projects, I may have inadvertently misrepresented some (if I've done so, let me know).</p> <p><strong>C++</strong>: 15; <strong>Java</strong>: 13; <strong>MATLAB</strong>: 11; <strong>Octave</strong>: 10; <strong>Python</strong>: 9; <strong>C</strong>: 5; <strong>R</strong>: 4.</p> <p>From this we can see that MATLAB is fairly well-represented amongst the accepted MLOSS projects. I took a closer look and found that of the 11 projects that are written in (or provide bindings for) MATLAB, all but one also support GNU Octave.</p>
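<p>For the curious, the counts above can be dropped into a few lines of Python to see the relative proportions at a glance (a throwaway sketch; the numbers are simply those reported above):</p> <pre><code># Language breakdown of the 50 accepted JMLR MLOSS projects, as listed above.
# A project can appear under several languages, so percentages do not sum to 100.
counts = {"C++": 15, "Java": 13, "MATLAB": 11, "Octave": 10,
          "Python": 9, "C": 5, "R": 4}
total_projects = 50

for language, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print("{0:8s} {1:3d}  ({2:.0f}% of projects)".format(language, n, 100.0 * n / total_projects))
</code></pre>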
<h2>Closing Thoughts</h2> <p>I think the position we've adopted is realistic, consistent, and suitably aspirational. We want to encourage and promote projects that strive for openness and the positive effects it enables (e.g., reproducibility and reuse), but we do not want to strictly rule out submissions that require a widely used, proprietary platform such as MATLAB.</p> <p>Of course, a project like MLOSS is only as strong as the community it serves, so we are keen to get feedback about this decision from people who use and create machine learning software. Feel free to leave a comment or contact one of us by email.</p> <p><strong>Note: This is a cross-post from Mark's blog at <a href="http://mark.reid.name/blog/what-does-the-oss-in-mloss-mean.html">Inductio ex Machina</a></strong>.</p>Mark ReidSun, 01 Sep 2013 10:33:28 -0000http://mloss.org/community/blog/2013/sep/01/what-does-the-oss-in-mloss-mean/Code review for sciencehttp://mloss.org/community/blog/2013/aug/14/code-review-for-science/<p>How good is the software associated with scientific papers? There seems to be a general impression that the quality of scientific software is not that great. How do we check for software quality? Well, by doing code review.</p> <p>In an interesting experiment between the <a href="https://wiki.mozilla.org/ScienceLab">Mozilla Science Lab</a> and <a href="http://www.ploscompbiol.org/">PLoS Computational Biology</a>, a selected number of papers with snippets of code from the latter will be reviewed by engineers from the former.</p> <p>For more details, see the <a href="http://kaythaney.com/2013/08/08/experiment-exploring-code-review-for-science/">blog post</a> by Kaitlin Thaney.</p>Cheng Soon OngWed, 14 Aug 2013 00:00:01 -0000http://mloss.org/community/blog/2013/aug/14/code-review-for-science/GSoC 2013http://mloss.org/community/blog/2013/apr/09/gsoc-2013/<p>GSoC has <a href="http://google-opensource.blogspot.com.au/2013/04/mentoring-organizations-for-google.html">just announced</a> the list of participating organisations. This is a great opportunity for students to get involved in projects that matter, and to learn about code development on a larger scale than the standard "one semester" programming project that they are usually exposed to at university.</p> <p>Some statistics:</p> <ul> <li>177 of 417 organisations were accepted, a success rate of 42%.</li> <li>40 of the 177 were accepted for the first time, which is a 23% proportion of new blood.</li> </ul> <p>These numbers seem to be in the same ballpark as most other competitive schemes for obtaining funding. Perhaps there is some type of psychological "mean" which reviewers gravitate to when they are evaluating submissions. For example, consider that out of the <a href="http://google-opensource.blogspot.com.au/2012/04/record-number-of-student-applications.html">4258</a> students who applied for projects in 2012, <a href="http://google-opensource.blogspot.com.au/2012/04/students-announced-for-google-summer-of.html">1212</a> students got accepted, a rate of 28%.</p> <p>To the students out there, please get in touch with potential mentors before putting in your applications. You'd be surprised at how much it could improve your application!</p>Cheng Soon OngTue, 09 Apr 2013 00:01:00 -0000http://mloss.org/community/blog/2013/apr/09/gsoc-2013/