Static analysis #239
FYI I have no previous experience with explicit static analysis, but I'm willing to work on implementing this if people think it's worthwhile. I currently know of two open-source static analysers, Splint and Infer.
Other suggestions welcome 😃 |
I'm not opposed to adding some static analysis. I don't have any experience with any FOSS linters for C, though. |
The Infer license is actually plain BSD (https://github.com/facebook/infer/blob/master/LICENSE) so should be OK in that regard I think. |
I don't know how valuable it is to confirm the example. The goal of Exercism is to help people write better code, and most people never see the example. Would it be more useful to tie into make, as we are planning to do for memory analysis (#169)? |
@Gamecock That could be a possibility... An optional, additional make target for static analysis, then? It seems there are lots of other FOSS C analysers to pick from (https://github.com/mre/awesome-static-analysis#cc), such as CppCheck |
I think that this is somewhat valuable. Not because statically analyzing the code here is terribly useful (though I guess we could leverage it to also have another 'extra credit' checker alongside Valgrind), but because I want to have a simple-to-steal setup for Travis. I've been meaning to do that at some point (not necessarily here): a Travis setup with Cppcheck and the Clang tools (all their sanitizers, which are a superset of Valgrind AFAIK; clang-tidy for static analysis / style; clang-format for formatting; etc.), plus maybe Valgrind if that's not covered by Clang. I haven't heard of Infer, but at a glance it looks like it may be worth looking into. |
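For illustration, a minimal sketch of what such a Clang-based check could look like, assuming the track's `src/example.c` layout; the test file name is hypothetical, and the flags are standard Clang/LLVM options:

```sh
# Compile and run the example with Clang's Address and Undefined Behavior
# sanitizers enabled (they report memory errors and UB at runtime), then
# run clang-tidy's static checks over the same source.
clang -std=c99 -g -fsanitize=address,undefined \
    src/example.c test/test_example.c -o tests.out
./tests.out
clang-tidy src/example.c -- -std=c99
```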
Looking at CppCheck, a simple implementation for CI would add something like the following to the CI script:

```sh
if cppcheck --std=c99 --enable=portability,warning,performance,style ./src/example.* | grep error ; then
    exit 1
fi
```

As an addition to the Makefile, it should be something similarly simple:

```make
CPPCFLAGS = --std=c99
CPPCFLAGS += --enable=portability,warning,performance,style

cppcheck:
	@cppcheck $(CPPCFLAGS) src/$(EXERCISE_NAME).c src/$(EXERCISE_NAME).h
```
|
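With a target like the one above, running the analysis locally would then be a one-liner; the exercise name here is just an example:

```sh
# Assumes the hypothetical Makefile target sketched above.
make cppcheck EXERCISE_NAME=two-fer
```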
SonarQube might also prove interesting - they are free for open source too 😃 |
You can use Codacy to automatically check the entire repo; this also includes checking new pull requests automatically, which notifies everyone about the code quality 🍰 🐱 🐶 EDIT: Codacy also integrates with GitHub, so setup is simple! |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
I've recently started a new job and not had as much time to look at this as I'd like. It may be some time but I would still like to play with this and get it working at some stage. If anyone else is interested feel free to contribute suggestions or PRs! |
+1. I think this is a great idea. In addition to the obvious "it checks the code", even if it's not exposed straight away I think it's likely to be useful for running on students' submissions as a starting point for feedback. Especially since learners may want to copy the behaviour, having something stand-alone and mature would be a good thing. Putting this into make is as you describe; however, I would have cppcheck itself fail the build (see the sketch after this comment) instead of piping to grep for "error", since grep can misfire, e.g. if you have a perf warning in a function that handles ...
Depending on the version of cmake in use (3.10 or above), this is really simple in cmake; see CMake's built-in cppcheck support (the CMAKE_<LANG>_CPPCHECK variable).
Final note: overzealous linters put good developers off. I would be reluctant to make PRs an automatic "no" because the linter isn't happy, until you are sure it's making good decisions. |
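A minimal sketch of that exit-status approach, reusing the flags from the earlier CI snippet; `--error-exitcode` is a standard cppcheck option:

```sh
# --error-exitcode makes cppcheck return a non-zero status when it finds a
# problem, so CI can fail the build without grepping the output for "error".
cppcheck --std=c99 --enable=portability,warning,performance,style \
    --error-exitcode=1 ./src/example.*
```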
Should we add static analysis to the CI tasks to analyse each implementation?
As it is, all the exercise implementations work just fine -- according to the tests that have been written. Static analysis would add a second layer of "correctness" checking, for the current exercises and also for any exercises added in the future. So far I don't think there is any specific need for this; it just occurred to me that it might be a nice-to-have.
Thoughts?