Posted by elza_1111 1 day ago
It has everything. Any force push to hide ugly prototype code is kept forever, which annoys me. I wish we could remove stuff from there, but it seems the only way to do it is to email support?
Here it is for the test repo mentioned
Looking at some of my projects, the activity page is entirely empty or only has a few items, so I suspect it was introduced "recently" and doesn't have data from before then.
Picking https://github.com/jellyfin/jellyfin/activity?sort=ASC as a busy example, the Activity page has no data prior to 7th March 2023. So the feature has existed for 2 of GitHub's 17 years of existence.
Not if you contact customer support and ask them to garbage collect your repo.
What I do when I accidentally push something I don’t want public:
- Force push;
- Immediately rotate if it’s something like a secret key;
- Contact customer support to gc the repo (and verify the commit is gone afterwards).
(Of course you should consider the damage done the moment you pushed it. The above steps are meant to minimize potential further damage.)
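For the force-push step, a minimal sketch, assuming the secret landed in the most recent commit on main:

    git reset --hard HEAD~1                  # drop the offending commit locally
    git push --force-with-lease origin main  # rewrite the remote branch

Note that GitHub can still serve the dropped commit by its SHA (and via the Activity tab) until support garbage-collects the repo, which is why the support step matters.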
2. Even for rotatable secrets, "I don't think there is any potential further damage" rests on the assumption that the secret is 100% invalidated everywhere. What if there are obscure and/or neglected systems, possibly outside of your control, that still accept that secret? No system is bug-free. If I can take steps to minimize access to an invalidated secret, I will.
Reporter can sell their current house and move to another home as a workaround
Closing ticket as workaround provided.
Thanks for being a great team player!
If it's not possible to invalidate your compromised software secrets, I would argue that you have bigger and more urgent problems to fix. But fair enough: Deleting them from GitHub might reduce the impact in such cases.
I would also like to point out that a leaked AWS secret can cost an organization hundreds of thousands of dollars. And AWS won't help you there.
It can literally break your company and put people out of work, depending on the secret/SaaS.
No amount of internal review and coding standards etc. will catch all of these things. You can only hope that you build the muscle memory to catch most of them, and that muscle memory is forged through being punched in the face.
Lastly, any pompous corporate developer making 200k a year or more who claims they've never shipped a vuln and that they write perfect code the first time is just a liar.
Everything you mentioned is security 101, widely known, and can be caught by standard tools. Shrugging that off as a learning experience does not really hold much water in a professional context.
The responsibility is on the programmer to learn and remember these things. Period, end of story. Just as smart pointers are a bandaid on a bigger problem with real consequences (memory fragmentation and cache misses), so too is a giga-linter that serves as permanent training wheels for so-called programmers.
In my head, the people who accidentally share secrets are also the people who couldn't set up trufflehog with a pre-commit hook.
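For reference, such a hook is only a handful of lines. Flags differ between trufflehog versions, so treat this as a sketch, not a drop-in:

    #!/bin/sh
    # .git/hooks/pre-commit: scan for secrets introduced since HEAD
    # and abort the commit if trufflehog reports a finding.
    trufflehog git file://. --since-commit HEAD --fail || {
      echo "trufflehog flagged a possible secret; commit aborted." >&2
      exit 1
    }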
People who believe they know what they're doing get overconfident, move fast, and make mistakes. Seasoned woodworkers lose fingers. Experienced doctors lose patients to preventable mistakes. Senior developers wipe the prod database or make a commit they shouldn't.
https://hsph.harvard.edu/news/fall08checklist/
>In a study of 100 Michigan hospitals, he found that, 30 percent of the time, surgical teams skipped one of these five essential steps: washing hands; cleaning the site; draping the patient; donning surgical hat, gloves, and gown; and applying a sterile dressing. But after 15 months of using Pronovost’s simple checklist, the hospitals “cut their infection rate from 4 percent of cases to zero, saving 1,500 lives and nearly $200 million,”
I made the shameful mistake of committing a private key (a development one, so harmless) only because it wasn't gitignored and the pre-hook script crashed without deleting it. More of a political/audit problem than a real one.
I guess I’m old enough to remember Murphy's laws, including the one saying "a safety system, upon failure, will bring the protected system down first".
I guess it's hubris: "I don't make stupid mistakes." You see it a lot in discussions around Rust.
Unfortunately, that is impossible: https://trufflesecurity.com/blog/anyone-can-access-deleted-a...
- enforce them on CI too; not useful for secrets, but at least you're eventually alerted
- do not run tasks that take more than a second; I want my commit commands to be instant.
- do not prevent bad code from being committed, just enforce formatting (see the sketch below); running tests on pre-commit is ridiculous, imagine Word stopping you from saving a file until you had fixed all your misspellings.
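Concretely, the kind of hook I mean, with gofmt standing in for whatever formatter your project uses (a sketch, not a drop-in):

    #!/bin/sh
    # .git/hooks/pre-commit: check formatting of staged Go files only,
    # so the hook finishes in well under a second.
    # (gofmt reads the working tree, which normally matches what you staged)
    files=$(git diff --cached --name-only --diff-filter=ACM -- '*.go')
    [ -z "$files" ] && exit 0
    unformatted=$(gofmt -l $files)
    if [ -n "$unformatted" ]; then
      echo "Unformatted files: $unformatted" >&2
      echo "Run gofmt, or bypass once with: git commit --no-verify" >&2
      exit 1
    fi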
My developer environments are set up to reproduce CI tests locally, but if I need to resort to “CI driven development” I can bypass pre-push hooks with --no-verify.
Pre-commit hooks should be much, much faster than most CI jobs; they should collectively run in less than a second if possible.
Also easier to enforce than pre-commit hooks, since it's done server-side.
- commit secret in currently private repo
- 3 years later share / make public
- forget the secret is in the commit history and still valid (relatedly, having long-lived secrets is less secure)
Sure, that might not happen for you, but the chances increase dramatically if you make a habit of committing secrets.
Always cycle credentials after an accident like committing them to source control. Do it immediately; you will forget later. Even if you are 100% sure the repo will never become any more public, it is a good habit to form.
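With AWS, for example, rotation is three CLI calls (the user name and key ID here are made up):

    # 1. create a replacement key for the IAM user
    aws iam create-access-key --user-name deploy-bot
    # 2. deploy the new key everywhere, then disable the leaked one and watch for breakage
    aws iam update-access-key --user-name deploy-bot \
        --access-key-id AKIAEXAMPLEOLDKEY --status Inactive
    # 3. once nothing depends on it, delete it for good
    aws iam delete-access-key --user-name deploy-bot --access-key-id AKIAEXAMPLEOLDKEY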
For example, there's an ICE reporting app now where people can anonymously report ICE sightings... but how anonymous is it really? Users report a location, which can be cross-referenced with location histories and quickly lead back to an individual. There may be retaliation against users of this app if the spiral into authoritarianism in the US continues.
For now they're going to be making a lot of basic mistakes, but eventually they'll grugq up and learn from people who are already used to dealing with the violence of their government.
https://docs.github.com/en/actions/how-tos/security-for-gith...
Never commit secrets for any reason.
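For GitHub Actions specifically, the GitHub CLI can store the value out-of-band (the secret name here is made up):

    # store the value as an Actions secret instead of committing it
    gh secret set DEPLOY_TOKEN < deploy_token.txt

Workflows then reference ${{ secrets.DEPLOY_TOKEN }} instead of a literal value.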
Same for your vault. The vault might be encrypted, but at some point you have to hand someone the keys to the vault.
Your secrets are not safe from someone if someone needs them to run your code.
This is true. I don't disagree with that or your assessment of repo secrets.
My comment was in the context of the grandparent committing secrets to a private repo, which is a bad practice (regardless of visibility). You could do that for tests, sure (I would suggest creating random secrets for each test when you can), but then you're creating a bad habit. If you can't use random secrets for tests, repo secrets would be acceptable, but I wouldn't use them beyond that.
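For the random-secrets-per-test suggestion, something as simple as this does the job (a sketch):

    # generate a throwaway secret for this test run only; nothing to commit or leak
    TEST_SECRET=$(openssl rand -hex 32)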
For CI and deploys I would opt for some kind of secret manager. CI can be run on your own infrastructure, secret managers can be run on your own infrastructure, etc...
But somewhere in the stack secret(s) will be exposed to _someone_.
The other upside with environment variables is that they work across projects. Set & forget, assuming you memorized the name. Getting at tokens for OpenAI, AWS, GH, etc., is already a solved problem on my machine.
I understand why a lot of developers don't do this, though. Especially on Windows, it takes a somewhat unpleasant number of clicks to get to the UI that manages these things. It's so much faster (relatively speaking) to paste the secret into your code. This kind of trivial laziness can really stack up on you if you aren't careful.
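For what it's worth, the setup is a one-time cost (values below are placeholders):

    # ~/.bashrc or ~/.zshrc
    export OPENAI_API_KEY="sk-placeholder"
    export AWS_ACCESS_KEY_ID="AKIAPLACEHOLDER"

    # Windows equivalent, persisted per-user:
    #   setx OPENAI_API_KEY "sk-placeholder"

    # any project can then check for it without hardcoding:
    echo "${OPENAI_API_KEY:+OPENAI_API_KEY is set}"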
Last time I tried, the default suggestion was Cloud KMS (yeah), now there's some new secret manager that also looks annoying: https://stackoverflow.com/questions/58371905/how-to-handle-s...
Check out at the event, commit in a clean state with the prior log history, overlay the state from after the elision, and replace the git repo?
When I had to retain log and elide state I did things like this in RCS. Getting date/time info right was tricky.
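These days git-filter-repo does the same elision in one pass; a sketch, with a hypothetical path:

    # rewrite history as if secrets/dev.pem never existed, keeping the rest of the log
    git filter-repo --invert-paths --path secrets/dev.pem
    git push --force origin main   # collaborators must re-clone or rebase afterwards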
So I just hardcoded the key. The key was rotated after the presentation.
Does not look very good on a repo.
It's interesting research, but will Truffle Security use the email addresses for lead gen or marketing purposes, like how they mined users' pingbacks from their XSS Hunter fork for stats?
https://portswigger.net/daily-swig/new-xss-hunter-host-truff...