
[REVIEW]: stardate: Combining dating methods for better stellar ages #1469

Closed · 36 tasks done
whedon opened this issue May 22, 2019 · 59 comments

whedon commented May 22, 2019

Submitting author: @RuthAngus (Ruth Angus)
Repository: https://github.com/RuthAngus/stardate
Version: v0.1.0
Editor: @arfon
Reviewers: @mattpitkin, @nespinoza
Archive: 10.5281/zenodo.3444905

Status

[status badge]

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/ee2bbcd6b8fd88492d60f2fe77f4fcdd"><img src="http://joss.theoj.org/papers/ee2bbcd6b8fd88492d60f2fe77f4fcdd/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/ee2bbcd6b8fd88492d60f2fe77f4fcdd/status.svg)](http://joss.theoj.org/papers/ee2bbcd6b8fd88492d60f2fe77f4fcdd)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@mattpitkin & @nespinoza, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.

Please try to complete your review in the next two weeks.

Review checklist for @mattpitkin

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.1.0)?
  • Authorship: Has the submitting author (@RuthAngus) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @nespinoza

Conflict of interest

Code of Conduct

General checks

  • Repository: Is the source code for this software available at the repository url?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Version: Does the release version given match the GitHub release (v0.1.0)?
  • Authorship: Has the submitting author (@RuthAngus) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Authors: Does the paper.md file include a list of authors with their affiliations?
  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
whedon commented May 22, 2019

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @mattpitkin, it looks like you're currently assigned as the reviewer for this paper 🎉.

⭐ Important ⭐

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: repository watch settings]

  2. You may also like to change your default notification settings for watched repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

whedon commented May 22, 2019

Attempting PDF compilation. Reticulating splines etc...


arfon commented May 22, 2019

@mattpitkin, @nespinoza - please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Any questions/concerns please let me know.

@mattpitkin

Hi @RuthAngus, one of the things we're asked to check is:

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Would you be able to add a few sentences to the README file and main documentation page telling people how they should contribute to the project? Just something along the lines of saying that if they want to contribute they can submit issues or pull requests from the github repo.

@RuthAngus

Absolutely! I have added a sentence to the bottom of the README.

@mattpitkin

@RuthAngus thanks for adding this. I noticed that in the raw version of the README.rst file there's some info on installing and an example of running the code, but these are commented out. Even with the link to the documentation that exists I think it would still be useful to have that info visible. Could you uncomment them?

@RuthAngus

@mattpitkin good suggestion -- I have now done so!

@mattpitkin

@RuthAngus I'm working through the very nice example here https://stardate.readthedocs.io/en/latest/tutorials/Tutorial.html#. When I get to the code block after the sentence "If you know the extinction you can pass that to the Star object too:" it throws an error because Av and Av_err are not defined. I'm assuming Av is the extinction value (and Av_err its associated error) that the previous sentence mentions, but this should be made explicit in the sentence, or the values should be removed from the example, so that it will run without an error if people have been following step-by-step.
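For anyone following along, a minimal sketch of what the corrected tutorial step could look like is below. The keyword names mirror the tutorial's snippet, and all numerical values are illustrative placeholders, not taken from the tutorial:

```python
import stardate as sd

# Observables from earlier tutorial steps (illustrative placeholder values):
iso_params = {"teff": (5770, 80),    # effective temperature [K]
              "logg": (4.44, 0.1),   # surface gravity
              "feh": (0.0, 0.1)}     # metallicity [Fe/H]
prot, prot_err = 26.0, 1.0           # rotation period and its uncertainty [days]

# The fix: define the extinction and its uncertainty before passing them in.
Av, Av_err = 0.2, 0.05               # V-band extinction (made-up numbers)

star = sd.Star(iso_params, prot=prot, prot_err=prot_err,
               Av=Av, Av_err=Av_err)
```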

@mattpitkin

@RuthAngus In the paper it's probably worth mentioning that the fitting uses the MCMC software emcee (with a reference) and therefore returns samples from the parameters' posterior probability distributions.

Related to this, is there any way to set the priors used for any of the fitted parameters?

@mattpitkin

@RuthAngus are there any plans to add a test suite and continuous integration to the package? I don't think this is a requirement for software that is published in JOSS (@arfon?), but it would be nice and might be useful for future development?

@mattpitkin

@RuthAngus @arfon Once the above questions have been answered I'm happy to sign this off as reviewed.

arfon commented May 26, 2019

@RuthAngus are there any plans to add a test suite and continuous integration to the package? I don't think this is a requirement for software that is published in JOSS (@arfon?), but it would be nice and might be useful for future development?

The JOSS guidelines on testing require at least the following:

Documented manual steps that can be followed to objectively check the expected functionality of the software (e.g. a sample input file to assert behaviour)

We definitely prefer automated tests (ideally hooked up to Travis/Circle CI etc too) but don't require them. It is required though that there are documented procedures that the reviewer can follow to objectively check that the software is doing what is expected.

@mattpitkin - does that help?

@mattpitkin

@arfon there is a very nice example and tutorial of using the code, but I think from what you say this may not quite be enough. As the code is doing stochastic sampling, the example given won't necessarily produce identical results each time.

@RuthAngus here's what I'd suggest by way of a test, which may be useful. First, make the star class, or its run_mcmc() method, accept a random state variable of the form required by emcee. With this you'll be able to seed the sampler to reproduce exactly the same results each time. Then run the code for a particular star that you've analysed before, using a random state initialised with a particular seed number, and probably starting the sampler very close to the already-known maximum posterior values, generating a few hundred samples. Save these samples and add them to the repo in a test directory. Also, add the script that you've just used to produce the samples to the test directory, but change it so that the samples get output to a different file. Then have the script read in the previously generated samples file and check that the new samples it produces are identical to the already existing samples (you could compare each sample, or just compare statistics like the mean, median and credible intervals to make sure they are the same). Does that sound doable?
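To make that suggestion concrete, here is one possible shape for such a test. This is only a sketch: generate_samples is a hypothetical stand-in for the real seeded stardate run, and the reference-file path is made up:

```python
import numpy as np

REFERENCE_FILE = "tests/reference_samples.npy"  # generated once and committed


def generate_samples(seed=42, nsamples=200):
    """Hypothetical stand-in for the real seeded run: in practice this would
    build a Star for a previously analysed target, seed the sampler, start
    the walkers near the known maximum posterior, and return the chain."""
    rng = np.random.default_rng(seed)
    return rng.normal(4.56, 0.1, size=nsamples)  # fake "age samples" [Gyr]


def test_samples_match_reference():
    new = generate_samples(seed=42)
    ref = np.load(REFERENCE_FILE)
    # With identical seeding the samples should agree exactly:
    assert np.array_equal(new, ref)
    # Comparing summary statistics instead is a less brittle alternative:
    assert np.isclose(np.mean(new), np.mean(ref))
    assert np.isclose(np.median(new), np.median(ref))
```

After generating and committing the reference file once, a test like this could be run with something like python -m pytest tests/.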

@RuthAngus

Hi @mattpitkin, thanks for these comments!

I have now fixed the Av bug in the tutorial and added emcee to the list of dependencies. Thanks for running through the tutorial! There is not currently any support for manually adjusting the priors, but that's a feature I'd love to add in future. Would you mind if I tabled this for the next version? The priors that I'm using are described in the appendix of the submitted AAS paper that accompanies this software. I have also added a mention and reference to emcee in the JOSS paper, thanks for suggesting that!

I do currently have some tests in the /stardate/tests directory but I don't provide instructions on how to run these. Should I add some instructions to the docs on how to run the tests with pytest or something? The test in integration_tests.py does something similar to what you suggest. It does use a fixed random number seed, but I had previously hard-coded this into the function that runs emcee, which doesn't seem like the most sensible choice :-). I have now added an argument to set the seed in the fit() function. This test doesn't save samples like you suggest, which is a great idea, but that would fail each time I changed something in the likelihood function or prior, so I'd need to regenerate the sample file each time I did that. Do you think this is worthwhile doing regardless? I also agree that adding continuous integration would be really nice. I've never set this up before and I'm not that familiar with how, e.g. Travis works, but would it be a problem that the big file of MIST model isochrones needs to be downloaded in order to run the tests?

@RuthAngus

@whedon generate pdf

whedon commented May 29, 2019

Attempting PDF compilation. Reticulating splines etc...


@mattpitkin

@RuthAngus here are some replies inline.

I have now fixed the Av bug in the tutorial and added emcee to the list of dependencies.

Great, thanks. I think emcee is still missing from the installation requirements described in the README file, but otherwise it's looking good.

I do currently have some tests in the /stardate/tests directory but I don't provide instructions on how to run these. Should I add some instructions to the docs on how to run the tests with pytest or something? The test in integration_tests.py does something similar to what you suggest.

Excellent. I'd missed that this test was there, but it looks like the sort of thing that is required. Adding instructions on how to run it with pytest would be very useful.

It does use a fixed random number seed, but I had previously hard-coded this into the function that runs emcee, which doesn't seem like the most sensible choice :-). I have now added an argument to set the seed in the fit() function.

Great 😃

This test doesn't save samples like you suggest, which is a great idea, but that would fail each time I changed something in the likelihood function or prior, so I'd need to regenerate the sample file each time I did that.

My suggestion of having a saved set of samples in the test suite and doing a direct comparison between those and a new set of samples may be overkill. Having a broader consistency test like you do in your current integration_tests.py should be enough (and is enough for this review, I think). However, in general it would be good to have some fixed exact comparison - as you say, you may change the likelihood/prior in the future, but when doing this it would be best to fully maintain backwards compatibility (unless you find a bug!), and a fixed test for a specific set of data and a specific likelihood function and prior would help ensure that this backwards compatibility is maintained. I don't think this is a requirement for this review though.

I also agree that adding continuous integration would be really nice. I've never set this up before and I'm not that familiar with how, e.g. Travis works, but would it be a problem that the big file of MIST model isochrones needs to be downloaded in order to run the tests?

It's relatively easy to set up Travis CI integration - an example of a .travis.yml file that I've used in one of my packages for this is here. Once you have that in your repo you just have to register the project on the Travis website. I think it should be able to handle having to download the MIST model isochrones file each time - often it has to download a large Docker image file, or install some large package, each time anyway (in fact you could create your own Docker image for it to use that already includes the file...)
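For reference, a minimal .travis.yml along those lines might look like the sketch below. This is not the file linked above, and the before_script step is only a guess at how the MIST isochrone grid could be pre-fetched (it assumes that instantiating isochrones' MIST_Isochrone triggers the download); both would need adapting to the actual repo layout:

```yaml
language: python
python:
  - "3.6"
  - "3.7"
install:
  - pip install -e .
  - pip install pytest
before_script:
  # Hypothetical pre-fetch of the MIST isochrone grid so the test run
  # itself doesn't stall on the download.
  - python -c "from isochrones.mist import MIST_Isochrone; MIST_Isochrone()"
script:
  - python -m pytest stardate/tests
```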

@RuthAngus

@mattpitkin I've added emcee to the README installation guide now, thanks for spotting that. I have also added some basic test instructions to the docs. Does this look alright to you? Thanks for providing an example of a .travis.yml! I have CI working now and I'm really pleased about this! Let me know if you have any other suggestions for improvements.

@mattpitkin

@RuthAngus Great, that all looks good to me. I've ticked off all my checkboxes and am happy to sign off on the review.

@arfon arfon added the AAS Papers being published together with a AAS submission label May 31, 2019
arfon commented May 31, 2019

👋 @nespinoza - how are you getting on with your review?

@nespinoza

Hi @RuthAngus,

Thanks first of all for all the work you've already done with @mattpitkin's part of the review --- I've been keeping an eye on his comments in order to not bring up the same ideas/comments again in my review.

There are a couple of things that I think should be stated in the stardate documentation, and which I actually had trouble with; this is what prevents me right now from ticking the Installation instructions box, but I think they are fairly easy to address:

  1. Because isochrones now only works on Python 3, stardate also only works on Python 3. As a hard-core Python 2.7 user (and yes, I know --- we have to evolve! On it), I think it is important that you state at the beginning of the docs that stardate will only work with Python 3 for this reason.

  2. On a similar note, it appears that when you do a pip install emcee, on several systems I have tried the default is to install emcee 2.2.1, which does not include the emcee.backends module (only introduced in emcee version 3) that stardate needs in order to work (line 176 in stardate.py). I think this really should be a shout-out to @dfm, but in the meantime perhaps the docs should also state that stardate only works on emcee versions 3 and up.
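Until the dependency requirements are pinned, one lightweight option would be a guard near the top of stardate.py along these lines (a sketch; it assumes nothing beyond emcee exposing __version__):

```python
import emcee

# Fail fast with a clear message on emcee < 3, which lacks emcee.backends.
if int(emcee.__version__.split(".")[0]) < 3:
    raise ImportError(
        "stardate requires emcee >= 3 (for emcee.backends); found emcee "
        + emcee.__version__ + ". Try: pip install -U emcee"
    )
```

Declaring emcee>=3 in the package's install requirements would enforce the same constraint at install time.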

Other than the above, I believe this is a fantastic package, kudos to @RuthAngus et al. for it --- looking forward to using it in my own research!

@arfon arfon added the accepted label Jun 12, 2019
@crawfordsm

The corresponding AAS DOI is 10.3847/1538-3881/ab3c53

arfon commented Aug 30, 2019

Thanks @crawfordsm. @RuthAngus - can you merge this PR which adds the ApJ link to the paper? RuthAngus/stardate#8.

Also, I forgot to check, is it ApJ you are publishing the accompanying paper in? If not, just modify the name of the journal on this line.

@arfon arfon removed the paused label Aug 30, 2019
@RuthAngus

Hi @arfon and @crawfordsm, it's an AJ paper, so I added the DOI and journal lines to the paper file without merging the PR, hope that's ok!

@RuthAngus

@whedon generate pdf

whedon commented Sep 3, 2019

Attempting PDF compilation. Reticulating splines etc...


arfon commented Sep 3, 2019

@RuthAngus - do you think you could update the reference to 'Angus et al (submitted)' to point to the AJ paper (https://doi.org/10.3847/1538-3881/ab3c53)?

After that, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software in Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:

  • The title of the archive is the same as the JOSS paper title
  • That the authors of the archive are the same as the JOSS paper authors

I can then move forward with accepting the submission.


@RuthAngus

@whedon generate pdf

whedon commented Sep 16, 2019

Attempting PDF compilation. Reticulating splines etc...


@RuthAngus

Hi @arfon, I have updated the reference in the paper and created a new release! The Zenodo DOI is: 10.5281/zenodo.3444905

arfon commented Sep 18, 2019

@whedon set 10.5281/zenodo.3444905 as archive

whedon commented Sep 18, 2019

OK. 10.5281/zenodo.3444905 is the archive.

arfon commented Sep 18, 2019

@whedon accept

whedon commented Sep 18, 2019

Attempting dry run of processing paper acceptance...

whedon commented Sep 18, 2019

Check final proof 👉 openjournals/joss-papers#964

If the paper PDF and Crossref deposit XML look good in openjournals/joss-papers#964, then you can now move forward with accepting the submission by compiling again with the flag deposit=true, e.g.

@whedon accept deposit=true

arfon commented Sep 18, 2019

@whedon accept deposit=true

whedon commented Sep 18, 2019

Doing it live! Attempting automated processing of paper acceptance...

whedon commented Sep 18, 2019

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

whedon commented Sep 18, 2019

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.01469 joss-papers#965
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01469
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@RuthAngus

Woohoo! Thank you!

arfon commented Sep 18, 2019

@mattpitkin, @nespinoza - many thanks for your reviews here ✨

@RuthAngus - your paper is now accepted into JOSS ⚡🚀💥

@arfon arfon closed this as completed Sep 18, 2019
whedon commented Sep 18, 2019

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01469/status.svg)](https://doi.org/10.21105/joss.01469)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01469">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01469/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01469/status.svg
   :target: https://doi.org/10.21105/joss.01469

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@whedon whedon added published Papers published in JOSS recommend-accept Papers recommended for acceptance in JOSS. labels Mar 2, 2020