I’ve got a working title: The Woman Who Squashed Terrorists: When an Embassy gets Hacked – Chris Kubecka
Great talk, and I was so intrigued that I forgot to take notes on my laptop. Maybe that’s also because I hadn’t had my coffee yet ;-). She talked about her passion: cyber warfare. Through some fun examples and experiences of her own she made you aware of how important all the little things in the chain of security are. Even a weak password for an email account can have a huge impact on an embassy. A gift can contain bugs or small electronic components, and even the people close to you can get drawn in. So a short summary, but the talk was great and I can recommend it to you.
Unlikely allies: how HR can help build a security-first culture – Alison Eastaway
She pulled people out of the audience, which was fun to witness, and continuously kept the crowd in motion by asking lots of questions, but in a fun way. She introduced herself and immediately asked who works with the HR department. Why? Because she thinks security starts with people, and HR knows people. Social engineering, for example, is done through people, so what is the difference between HR and social engineering? HR helps you understand people for the benefit of the organization, where social engineering has the goal of obtaining data through knowledge of people.
The job of HR is to interact with people, and they are the first contact for the outside world, so they are the most vulnerable to things like mail attachments and social engineering. How can HR and security work together? HR doesn’t use breaches to get people involved, but makes getting involved fun.
During onboarding, new hires get the CTO’s number for any possible questions. There are also playful rules: if someone leaves his laptop unattended and unlocked, it gets used to declare that he owes everyone cookies.
So even with a wireless keyboard left in another room you can be cookified.
HR invites visitors and shows them what they’ve done, with demos about phishing. She also talked about organization and culture, and what drives a culture: what it rewards, tolerates and punishes.
Psychological safety is the belief that you can take a risk and you won’t be punished for it.
We are ashamed to look dumb, but looking dumb is something we should cultivate: she wants to encourage us to take the risk of feeling dumb. The ability to be vulnerable is also a form of psychological safety.
Do certain types of developers or teams write more secure code – Anita Damico
- Subtitle: Human factors in AppSec
Software vulnerabilities are a major gateway for breaches. Some vulnerabilities remain undiscovered for a long time despite the many eyes that went over the code, and the publication date of a vuln says nothing about when it was first discovered. Again, a reference was made to static analyzers, which cover just 14% of all security weaknesses.
What if you, as a human, would hunt for vulnerabilities? Developer characteristics, team characteristics, and when and where the code was written are what she calls human factors. These can be divided into individual-level, group-level and cultural factors. But there is also the physical aspect: if people are fatigued or deaf, this can make a difference in how they code.
One way to measure such an aspect is, for example, looking at the time of a commit: when it’s quite late in the evening, you can assume the person was tired. The size of the organisation can also be one of the human factors. DARPA has performed research on the human factors that occur in open source and proprietary code.
This is done by measuring, but in this case also by asking: how many hours did they sleep, and how long did they have their headphones on while coding?
Is there a difference when teams co-locate? We felt there was, but the measurements say otherwise: it makes no difference.
And yes, it came up that late-night commits have more bugs than morning commits. She referred to the circadian rhythm and showed a chart of alertness over the day, which also visualizes the afternoon dip among other dips, and in which morning persons and evening persons show different curves. Developer focus matters too: defects occur more often when a developer is modifying many other files at the same time, or when many developers modify the same file.
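As a rough illustration (my own sketch, not a tool shown in the talk), the time-of-commit measurement could look like this, assuming that “late night” means between midnight and 6am:

```python
from datetime import datetime

# Hypothetical commit timestamps (ISO format); in practice these would
# come from something like `git log --pretty=%aI`.
commits = [
    "2019-06-12T02:13:00",
    "2019-06-12T09:45:00",
    "2019-06-12T23:58:00",
]

def is_late_night(ts: str, start: int = 0, end: int = 6) -> bool:
    """Flag commits made between `start` and `end` o'clock (local time)."""
    hour = datetime.fromisoformat(ts).hour
    return start <= hour < end

# Only the 02:13 commit falls in the midnight-to-6am window.
late = [c for c in commits if is_late_night(c)]
print(late)
```

The window boundaries are my assumption; the talk only stated that late-night commits correlate with more bugs.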
Quality and security can be measured by the number of developers working in a single file: the more developers work in a single file, the greater the risk of cumulative issues.
Source code files that 9+ developers have worked on are many times more likely to contain vulnerabilities. Then the bystander effect occurs: everybody thinks someone else will do it, and in the end nobody does.
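A minimal sketch of that metric (my own illustration; the 9+ threshold is from the talk, the data is made up):

```python
from collections import defaultdict

# Hypothetical commit history as (file, author) pairs; in a real repo
# this could be derived from `git log --name-only` output.
history = [
    ("auth.py", "alice"), ("auth.py", "bob"), ("auth.py", "carol"),
    ("util.py", "alice"),
]

def authors_per_file(history):
    """Count distinct developers who touched each file."""
    authors = defaultdict(set)
    for path, author in history:
        authors[path].add(author)
    return {path: len(people) for path, people in authors.items()}

# Files touched by 9 or more distinct developers get flagged as risky.
RISK_THRESHOLD = 9
counts = authors_per_file(history)
risky = [p for p, n in counts.items() if n >= RISK_THRESHOLD]
```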
Interactive churn is based on how many lines of a commit change lines of code that another developer last touched. A few numbers were also given on how badly a developer performs after, for example, not having slept for 17-19 hours.
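A minimal sketch of interactive churn as I understood it (my own interpretation): the fraction of changed lines that were last touched by someone other than the committer.

```python
def interactive_churn(changed_lines, last_author, committer):
    """Fraction of changed lines whose previous author differs from the
    committer. `last_author` maps line number -> author who last touched
    that line (as `git blame` would report)."""
    if not changed_lines:
        return 0.0
    foreign = sum(1 for ln in changed_lines if last_author.get(ln) != committer)
    return foreign / len(changed_lines)

# Hypothetical example: bob changes 4 lines, 3 of which alice wrote last,
# so the interactive churn is 0.75.
blame = {10: "alice", 11: "alice", 12: "bob", 13: "alice"}
ratio = interactive_churn([10, 11, 12, 13], blame, "bob")
```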
The wrap-up emphasized that more study is needed on the effects of human factors on code, and thus on the software developers produce.
The Security we Need: Designing Usable IoT Security – Damilare D. Fagbemi
After his introduction the presenter explicitly states he is not speaking for Intel, and starts the talk with a quote: why do good users do bad things? He makes a reference to kids: dad explains that he just builds cool tools, but the kid isn’t interested in the how, only in what it does and how he can play with it. He sees us developers as special forces, not pop stars.
Try to make your application as safe as possible for the end user. He uses the metaphor of a car built by mechanical engineers to point out that if you just let builders build a tool, the tool will have lots of cool features but will probably not be secure. He then comes with the 5 C’s, five challenges:
Point 1: security intricacies don’t make sense to users; they assume the experts are the experts.
Point 2: the user thinks he is technical enough to do it himself. Security is not interesting for the user; the seat-belt law still isn’t fully followed after 50 years. Are they even aware of the dangers?
Point 3: usable security is not demanded from vendors, so it’s up to the developer.
Point 4: barriers to the necessary workflow are a reason to avoid implementing security. If it were made easier, it would get implemented more.
Point 5: the different views of security, from executive and architect to implementer and user.
He follows up with ten principles. The first is to know which hat you wear and to use secure defaults, because lots of IoT devices stay on for long periods of time. A network is never trusted: whatever your Windows PC says about the corporate network, it is not a trusted network. Authenticate is the next principle, where he shows where users are likely to adopt a certain behavior, like pre-shared keys. Select the level of security and advise users how to implement it, as devices usually come as pre-configured sets. Design to scale, so it doesn’t matter how many devices are run.
Do not allow passive or transitive access.
Implement runtime anomaly detection and health checks.
Revoke previously granted authority.
Keep secrets secret. Most users are glad not to worry about secrets/passwords.
Monitor circumvention of security controls. If something is happening, there is a reason why.
No More Whack-a-Mole: How to Find and Prevent Entire Classes of Security Vulnerabilities – Sam Lanning
Sam introduced himself and the fact that Semmle has been bought by GitHub. He immediately zooms in on CVE-2017-8046; I love the way he gets straight to the point. A CVE which involves Spring Data. A timeline was shown of the discovery of the exploit, the patch, and the exploit again because of a poor patch. After this timeline he showed a few more bugs, like an RCE in Apache Struts.
He raises the problem of looking further: when a mistake is discovered, look in the code for more of the same mistake. This is called variant analysis. You can use your IDE for this and search the codebase for similarities, but that takes a lot of time and effort, and on top of it all is prone to human error.
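The manual version of this can be sketched as a crude pattern search (my own illustration, not the tool from the talk; the pattern and file layout are hypothetical):

```python
import re
from pathlib import Path

# Hypothetical known-bad pattern: building a File from an unsanitised
# archive entry name, the kind of "same mistake" you would grep for.
BAD_PATTERN = re.compile(r"new\s+File\([^)]*getName\(\)")

def find_variants(root: str):
    """Naive variant search: scan Java sources under `root` for the
    known-bad pattern and report (file, line number, line) hits."""
    hits = []
    for path in Path(root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if BAD_PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

This is exactly the slow, error-prone approach the talk argues against: a regex has no semantic understanding, which is why dedicated variant-analysis tooling exists.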
The number of security engineers is out of balance with the number of software engineers; the example given is 34 at Microsoft versus 10,000+ software engineers. So how do they do it? Automate variant analysis. There are a few tools that do this; Firefox, for instance, has a tool which does variant analysis.
Make general queries which can help you find multiple vulns.
Snyk published ZipSlip, the vulnerability Sam uses for his presentation; he explains how it works and gives some practical information, for example about directory traversal.
The path used in Java for new File while unzipping is not normalised, and this can be fixed by a simple line of code which checks that the target path stays within the destination directory. The workflow is not only to find the bug, but also to find variants of that bug and patch them all at the same time. He also asked if there are organisations in the room with a vulnerability response process.
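The talk’s example is Java’s new File; a minimal sketch of the same ZipSlip check in Python (my own illustration of the idea, not code from the talk):

```python
import os
import zipfile

def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting entries whose resolved path
    escapes dest_dir (the ZipSlip / directory traversal pattern)."""
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for entry in zf.namelist():
            # Resolve the would-be target path and make sure it stays
            # inside the destination directory.
            target = os.path.realpath(os.path.join(dest_root, entry))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked path traversal entry: {entry}")
        zf.extractall(dest_root)
```

The key line is the normalisation check: an entry like `../evil.txt` resolves to a path outside the destination and is rejected before anything is written.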
He also states what variant analysis is not: a replacement for good security architecture, a way to avoid large refactors, a form of exploit mitigation, or a replacement for other security processes. And: use and contribute to the shared knowledge and checks.
Making the web secure, by design ++ – Glenn ten Cate & Riccardo ten Cate
A short introduction covered how they work together as brothers and how he got started with security: by coding auto-aiming bots in Unreal Tournament.
We are then guided through some processes in secure development with SAST and DAST, with some tools enumerated. He then posed the question: “With the tooling of your choice, what percentage will be covered?” Because of the lack of context within the tools, the percentage is actually quite low.
A human can, for example, increment a date because he knows the context around it. So automate the SAST and DAST steps in your pipeline, for example by building a Docker instance; you can centralise this, which is a big win. How to deal with false positives? The OWASP Benchmark project presents a false-positive ratio per tool.
Defect Dojo, a project supported by OWASP, also gives you detailed information about a CVE if one is found, and gives a fair percentage of the found vulnerabilities. The developers are the ones who can act on this information and fix the vulnerabilities. An example given was XSS, which can be fixed by certain design patterns or framework patches.
They’ve made a framework called SKF, the Security Knowledge Framework, where the concept is to design your security instead of securing your design. The ASVS standard is explained with some examples. The question was raised who uses ASVS continuously: no one. But they think they’ve got the solution with the SKF framework, and they showed us a demo. They showed some labs, and they’ve built their tool from the developer’s perspective, with code examples!
They also think it supports security within projects because it costs less time to implement, since the information is easier to find. The demo showed us a dashboard with four items, each with buttons from where you can investigate or experience the OWASP aspects.
It is also possible to create your own checklist, so you can assemble your own on top of the OWASP ones. To select the checks which apply to your project, you create a project and follow the wizard steps; a list of checks is generated which you can copy to a ticketing system of choice, like JIRA, and the generated list can also be exported in CSV format. The labs are available to download and run in a Docker container.
Unfortunately I had to leave after this talk due to other responsibilities, but I will most certainly look up the remaining talks I wanted to see. As a developer I was positively surprised by what I got to see and hear versus what I expected. Too bad to see it’s still a small world; this should be bigger! Especially among developers: my guess is that if I look into some senior dev’s code, a lot of the security aspects I’ve seen at this conference won’t have been implemented. I felt most welcome and got more excited about software security. I look forward to the next conference with security in mind!