AI Generated Summary
This is the first episode of Null Charcha, a panel discussion featuring Anant Shrivastava and Prashant discussing “Security Then and Now” - comparing security practices from 10+ years ago to the present day.
Panelists
- Anant Shrivastava: Close to 15 years experience, trainer, pen tester, manager/director, maintains open source projects (Android-focused, code review), maintains hackingarchivesofindia.com portal tracking Indians in hacking space
- Prashant: Known as “corrupt” in community, ran NULL jobs portal for 10 years, started from defensive side, moved to offensive security through NULL, pen tester, trainer, currently director
- Personal interests: Anant - data hoarder (20TB), anime binge watcher (One Piece episode 352); Prashant - likes hikes and bushwalks
Key Topics Discussed
Security Then vs Now:
2000-2010 Era:
- Hacking was fun: People were doing it because it was exciting, wanted to experiment
- Finding vulnerabilities: Way easier, simpler - organizations struggling to identify what security professionals were doing
- Straightforward attacks: What was a straightforward attack in 2010 might now be thwarted first by the browser, then the application framework, then the application code itself
2010-2022 Era:
- Established job opportunity: Has become commercial enterprise, money-making opportunity
- People here for money: Not because they wanted to be here - changes perspective
- Nine-to-five: For many, infosec is effectively nine-to-five earning opportunity
- Person excited vs day job: Person in exciting place might spend 2 days exploring landscape vs person earning day job just finishing and moving on
- Default protections: Things more complicated, in-depth - default protections have come into place
Entry Paths:
- Earlier: Pathway from sysadmin or other role, eventually moved into security
- Now: Can start directly in security from college
- Problem: People starting directly at security level - gaps in knowledge/skill set because never learned from ground up
- Reality: Still tough for freshers to get into security even today, most still follow path where sysadmins pivot to security
- Certifications: Comparatively easier now - few certifications, some companies might get you in job
Tools and Automation:
Scanners:
- Always meant to be aids: Not replacement for manual pen testing
- Evolved: More useful now - earlier scanners were person trying to automate, fewer plugins/checks
- Still identify low-hanging fruits: But no replacement for manual testing for business logic flaws
- Fringes then and now: Some people happy taking scanner output, saying “done VAPT” - only VA part done, compliance check sorted
- Organizations looking to mitigate: Still rely on manual approach
Manual Testing:
- More prominent now: But was prominent before also
- Earlier: People doing manual, then writing automations around it
- Now: More automations in place, people have more time to focus on stuff needing manual attention
- Business logic: Manual will always be there because business logic issues
- Till security is art: Till information security is considered art, human required
- If/when move to science: Humans of higher caliber required, most could be automated
Bad Actors:
Evolution:
- More sophisticated attacks: Bad actors have gone from then till now
- Conti malware playbook: Accidentally released online - much more structured playbook compared to large number of pen testing organizations
- Incentive structure: Bad actors have far bigger incentive to do things right - going to earn big chunk of money, punishment if anything goes wrong
- Much more organized: Much more structured compared to person doing VA/PT
- Time interval: VA/PT there for fixed time interval, can throw ball on other side saying “time constrained”
- Bad actors: Have embraced every single thing you can think of
Defense Evolving:
- Defense evolving because offense evolved: Offense is automated, offense has taken up part of doing deed in shorter span of time
- Defense has to evolve: To counter those offenses
- Both sides complementing: Each other
- 0-days dropped: Within a few minutes of a 0-day dropping, attacks are already visible in the wild - defense often hasn't even received the advisory saying a 0-day dropped
- Cat and mouse game: One team lagging, other one pulls along
Logging and Monitoring:
Logging:
- Problem solved: Now have access to so many resources - collecting logs, performing analysis much easier
- Still lacking: Context behind logs - have to correlate, piece together what happened based on limited information
- Logging predominantly solved: Resources and tools are available (AlienVault, ELK stack) - the remaining problem is where to store so many logs
- Logs too large: An organization generating 1TB of logs per day with 180-day retention needs 180TB of storage
- Problem: Not that the logs aren't there, not that they can't be connected - it's where you store them; processing comes way after
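The storage math above (1TB/day, 180-day retention) can be sketched as a quick back-of-the-envelope calculation; the numbers are the episode's own example:

```python
# Back-of-the-envelope log retention sizing, using the episode's example numbers.
def retention_storage_tb(tb_per_day: float, retention_days: int) -> float:
    """Total storage needed to keep logs for `retention_days` at a steady daily rate."""
    return tb_per_day * retention_days

# 1 TB of logs per day with a 180-day retention policy:
print(retention_storage_tb(1.0, 180), "TB")  # 180.0 TB, before compression or tiering
```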
Monitoring:
- Context piece still problem: Uniformity of logs, sync between logs
- Government directive: Asking organizations to sync time clocks - organizations might have different clock set, servers not syncing with NTP servers
- 5am attack might be 5pm: An attack logged at 5am might actually have happened at 5pm Indian Standard Time because of unsynced clocks
- PII in logs: Large amount of data being stored - becoming very common to have PII or stuff that should not be stored in logs getting stored
- Application crash dump: Contains xyz details - those sort of problems people trying to solve
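The clock-sync problem above can be illustrated with timezone-aware versus naive timestamps; the servers and times here are illustrative, not from the episode:

```python
from datetime import datetime, timezone, timedelta

IST = timezone(timedelta(hours=5, minutes=30))  # Indian Standard Time, UTC+05:30

# Two servers log the same event: one writes a timezone-aware UTC timestamp,
# the other writes naive local IST wall-clock time with no offset information.
event_utc = datetime(2022, 6, 1, 11, 30, tzinfo=timezone.utc)
aware_line = event_utc.isoformat()                                    # 2022-06-01T11:30:00+00:00
naive_line = event_utc.astimezone(IST).strftime("%Y-%m-%d %H:%M:%S")  # 2022-06-01 17:00:00

# Without knowing the second server's zone, 11:30 and 17:00 look like two
# different events; correlation only works after normalizing to one zone.
recovered = datetime.strptime(naive_line, "%Y-%m-%d %H:%M:%S").replace(tzinfo=IST)
print(recovered == event_utc)  # True once the zone is known
```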
Mindset Change:
- Attack happening/will happen: Mindset converted that attack is happening, attack will happen, you will get hacked, or you are hacked already and don’t know
- Security before attack: Less interesting - knowing when attack happened, what impact more requirement
- Have to assume: Attacks might happen - in that context logging/monitoring plays important
Defensive Evolution:
Frameworks:
- Security by default: Standard functions marked with deprecation warning or clear warning - should use other function with filters by default
- Default functions have filters: Insecure function available if want to risk going insecure way
- By default frameworks providing security
OS Level:
- Android/mobile devices: Don’t even give by default root level access to device owner - even though owner, still not master of device
- Limits attack surface: Way lower
- Desktop protections: Memory-safe execution, avoiding use-after-free, stack cookies/canaries being added
- Windows, Linux, Mac: Everywhere aim is to thwart attacker from getting easy execution
Automation:
- Defensive side automation: Blue team utilizing defense automation to defend systems
- Definitely helped
Broken Access Control:
Why Always There:
- OWASP Top 10: Broken access control always there since 2003
- Not just web: Infrastructure, AD side, other technologies - always major issue
- Balance: Security always balance between security itself and convenience of user
- Make system difficult: User will find bypasses around security controls
- Balance not found: Properly - leads to different problems like bypassing authentication
Amalgamation:
- Multiple issues: Broken access control is an amalgamation of multiple issues, much like "injection" is an umbrella for SQL, XSS, XML, and LDAP injection flaws
- Identity and access management: Critical piece - no one wants to take ownership, that’s where everything goes wrong
- Azure AD environment: One user permission incorrectly given can lead to catastrophe
- Cloud environment: Shadow admins - not admin on face but can become admin or make someone else admin because of specific permission
- Easy to screw up: Very difficult to identify where you went wrong
- Automation gap: Still big gap - not easy to automatically detect something went wrong
- Right call at one point: Later in lifecycle something else changed, decision now causing problem - exploitable situation
- Frameworks trying to solve: But still large gap needs to be filled
Funny Vulnerability Stories:
Prashant’s Story:
- Client very hip: In kickoff meeting saying “got all security controls in place”
- Just tried payloads: In login page - there was XSS injection there
- That still happens: Sometimes
Anant’s Stories:
- Vendor web applications: 2016-17 time frame - if certain vendor created web application, would try blind SQL injection on login page, have access to entire database
- Big application space: Client had applications for entire European region and Asian region - every country had own dedicated websites
- Testing one website: Given blank slate, started with one website
- Forgot to block .svn: The whole working copy had been copied over, .svn folder included, and they forgot to block it
- Got usernames: The .svn folder's config file contains the usernames used to commit and the URL of the SVN repository
- SVN repository publicly accessible: And the username was also the password
- Got access: To entire SVN repository - about 50 different repositories for each country
- Dropped PHP file: Shell was live in like 5 minutes - had whole CI/CD automation, if file added to source code automatically gets deployed
- Within hour: Had shells on each and every website - different servers, different AWS accounts, different setups
- How deep to go: Do we want to stop at one point or keep going in
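The exposed-.svn finding in the story can be checked for mechanically. A minimal sketch, assuming a hypothetical target URL; the artifact paths are the standard metadata files an SVN working copy leaves behind:

```python
from urllib.parse import urljoin

# Metadata files an SVN working copy leaves behind when the .svn directory
# is deployed to the web root and not blocked (entries: pre-1.7, wc.db: 1.7+).
SVN_ARTIFACTS = [".svn/entries", ".svn/wc.db", ".svn/all-wcprops"]

def svn_probe_urls(base_url: str) -> list:
    """Build the candidate URLs to request when checking for an exposed .svn folder."""
    if not base_url.endswith("/"):
        base_url += "/"
    return [urljoin(base_url, artifact) for artifact in SVN_ARTIFACTS]

# Hypothetical target; on a real engagement you would GET each URL and treat
# an HTTP 200 response as a finding worth pulling the whole repository from.
for url in svn_probe_urls("https://example.com/app"):
    print(url)
```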
Other Stories:
- AD restrictions, AV restrictions: Try figure out bypass around them, running around trying figure out what works
- Pivoting across network: See what clicks where
- Nessus scan: Started, took 15 minutes, looking at nmap output - saw 8080 open, Tomcat there
- 45 minutes done: Inside Tomcat because of default credentials, got shell, credentials dumped, moved to other machines, common passwords
- Within one hour: Whole pen test done - were AD enterprise admin
- Fun to see: But from defense point of view - why mistakes happening
- Silly scenario: Exploited commonly known vulnerability, asked “why still unpatched?”
- Response: “Give me two minutes, I’ll show you I patched it”
- Problem: Change control not allowing restart - patch deployed but couldn’t restart, patch not in effect
- Ground reality: Company can't change anything in the first month of a quarter (critical), last month of a quarter (critical), first week of a month (critical), or last week of a month (critical)
- Effectively: Roughly two weeks per quarter times four quarters = about eight weeks a year in which any change can be made to the environment
- Defense lives with: Need to serve business, business needs to function - balance has to be maintained
Open Source:
Evolution:
- Earlier: Open source code was big no-no
- Changed: Today proprietary software has open source code in it
- Why commercial software required: Open source is somebody doing work, making available to world - proprietary specifically designed solving exact problem company requires, with additional support
- Additional support: Never there or generally not easily available for open source component
Open Source Term:
- Earlier: “I am writing something, want to show world - if you like it contribute, if don’t like it fine, if feel going wrong direction submit patch or fork it”
- Licensing: Very restricted, very few licenses, very clear about freedom
- Basic tenets: Freedom of access to code, freedom of tinkering, freedom of manipulating
- Now: Effectively way of building resumes - need 4-5 different open source projects for good job opportunity
- Corporates: Show “we are not bad players, see we open source stuff”
- Point: Code available, want to do something with it your call, we are going to do things our way
- Other side: “We made something fantastic, here it is, contribute as much as you can so later can take contributions and make proprietary code”
- Checks and balances: In place but pretty much that’s way corporates look at open source now
Open Source in Corporate:
- Bigger team: More complicated to create even tiniest piece of software
- Relying on: Code written by smaller teams - smallest entity is single person (open source code)
- Reduce time to market: Relying on open source code as starting point
- Cascading dependencies: Building something and needing one module (say authentication) - you pull in an OAuth module rather than write everything from scratch
- Write from scratch: Conventional wisdom says writing everything from scratch ends up causing more problems
- Open source components: By virtue of being open source more comfortable using other open source components
- Not third party: Talking about fourth party, fifth party, up to eleventh, twentieth, nth party dependencies
- Everyone using: Each other’s open source components
- Very few paying attention: To what they’re using - where SBOM directive mandate came from
- Third party we don’t care: Whatever version is there is there, can’t do anything about it, need to inform third party
- Source code reviews: People would refrain from looking at third parties - that’s problem
- Composition: Web software is about 80-20 (80% open source components you didn't write, 20% what you are writing)
- Desktop software: Much more proprietary and hidden - players still use own frameworks, own stuff
- Flutter, Electron: Things going in same direction - desktops relying on open source components
- Same problem: 70-80% of code from open source world
- From days: “I don’t want open source code in codebase because license” to “can we put barriers, use API, ensure GPL2 not GPL3, then okay using it”
- People evolved: Found ways to bypass restrictions - everyone gung-ho about open source in corporate, don’t see it going anywhere, see it growing more
SBOM (Software Bill of Materials):
- Not issue per se: More of “knowing your own” - need to know what is there in environment
- Existed for long time: A lot of people said “this is silver bullet”, prominent people debunked - SBOM not going to solve all problems
- SBOM is first step: Effectively knowing what you have - like inventory
- Inventory never accurate: Just like SBOM might not be accurate most of time, but gives indicator around what is there
- Journey starts: From SBOM - not the end, should not be solution to problem, just start point
- Once know what there: Need to start looking at attack surface reduction, version upgrades, why are we using this
- npm example: `is-even` and `is-odd` modules - one depends on the other, and thousands of packages depend on this one piece
- Till dependencies are there: People don't realize "to save two lines of code I've added a whole module, with its owner as a trusted entity" - things are going to be tricky
- SBOM just start: That’s where you start with journey
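The is-even/is-odd point is easiest to see by writing out what those modules actually do. A Python equivalent of the two-line logic that each module packages as a separate dependency:

```python
# The entire useful content of the two npm modules, in two lines -- yet each
# is a separate package, a separate version to track, and a separate
# maintainer you implicitly trust in your supply chain.
def is_even(n: int) -> bool:
    return n % 2 == 0

def is_odd(n: int) -> bool:
    return not is_even(n)  # mirrors one module depending on the other

print(is_even(42), is_odd(42))  # True False
```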
SBOM Effectiveness:
- Effective if: Every entity in line is maintaining SBOM - everyone gets cascaded
- If no obfuscation: Easy to automatically identify second or third level dependencies
- npm done good job: If installing npm modules, has further dependencies down line, npm system keeps track of all dependencies - get list of all dependencies (not just direct but second or third level)
- Holistic idea: What kind of packages using
- Problem gets complicated: When have open source component depending on open source component but depending on component written in different language
- npm component: Depending on something written in Go, which depends on something written in Python, which depends on something written in Perl
- Multiverse of madness: Not just one way of tracking, different ways of identifying modules going to be there
- That’s where problems: Can come up, people still facing, trying to solve
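The cascading third-to-nth-party point above can be sketched as a walk over a lockfile-style dependency map. The package names and graph here are invented for illustration, in the spirit of what `npm` records:

```python
# Toy dependency graph in the spirit of a package-lock file: walking it
# surfaces not just direct ("third-party") but nth-party dependencies.
DEPS = {
    "my-app":      ["auth-lib", "http-client"],
    "auth-lib":    ["is-even"],
    "is-even":     ["is-odd"],
    "is-odd":      [],
    "http-client": [],
}

def transitive_deps(pkg, graph):
    """Return every package reachable from `pkg`, at any depth."""
    seen = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

print(sorted(transitive_deps("my-app", DEPS)))
# → ['auth-lib', 'http-client', 'is-even', 'is-odd']
```

A flat SBOM is essentially this listing materialized; the "multiverse of madness" begins when one node in the graph is resolved by npm, the next by Go modules, the next by pip.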
Operating Systems:
OS Does Not Define Hacker:
- Modern OSes interlinked: Does not matter which OS you use to hack
- Windows: Has the Windows Subsystem for Linux
- Linux: Have all different tools and capabilities needed
- Cross-platform tools: Majority of tools pen testers use are cross-platform (Windows and Linux)
- Don’t get stuck: Should know basics of all of them
- Corporate: Windows still most relevant OS from end user perspective, but when comes to servers Linux is uncrowned king
- Depending on what attacking: Going to come across all of them - why focus on specific OS
- Use what works best: In current scenario and get done with it
Scenarios:
- Interesting tool: Allows hack into Azure systems - Windows only, works on PowerShell
- PowerShell on Linux/Mac: Not able to handle it - need Windows machine
- If no Windows knowledge: "I don't know how to start Windows or how to open a terminal" - that's bad as a pen tester
- Dealing with lot of systems: As pen tester, auditor, forensic investigator - dealing with lot of systems, esoteric environments
- Important: Know how to deal with these systems at basic level
- Extend: Not just operating system - even to tool sets (vi vs emacs, sublime vs visual studio code)
- Handle tool: That is in front of you at very basic capacity - should not become bottleneck
- After pandemic: People got stuck - "I can't sit at this system, I need my mouse, my keyboard" - that should not be the case
- By nature of job: More like tenants on someone else’s system than doing something on own systems - have to be flexible
Blurring Boundaries:
- Windows machine: WSL2 gives nearly replica of Linux kernel environment
- Mac OS: Because of how Mac and Unix/Linux are interrelated anyway, it could always run more software from the Unix/Linux background
- Apple M1/ARM: Moving into ARM territory, getting parallels into Android and iOS world
- Linux: Have been ARM system, have been x86 system
- Mac blurring lines: On ARM aspects
- Windows blurring lines: On x86, Windows and Linux kind of systems
- Does not matter: Which system in front of you - should be able to deal with it, move forward
- Should not be bottleneck: Just facilitator to get to command prompt, get to actual application
- IoT hacking: When talking about IoT environments, will be restricted with what can do within different environments
- For most: Linux would be ideal environment - tap, dissect, do bunch of things
- For other environments: Might simply be too big deal to get emulator running on Linux - might just be one click away on Windows
- Don’t be stuck: On environment, rather focus on getting things done
Security Research:
Term:
- Niche term: People used to define themselves
- Everyone researching: In order to do jobs, in order to attack any application as pentester
- Level of research: Different - people who work in Project Zero at Google, entire job is find zero days, disclose bugs
- Some researchers: Focus on specific type of library, go into whole lot of depth
- Term hasn’t changed: That much - just people now self-coined it themselves instead of being given to them
Research Difficulty:
- Term research: Actually researching - lot of people take on face value
- Big because: So much content available now compared to initial days
- Initial days: Someone would write something based on own experience
- Now: Lot of people writing about own experiences
- People like me and Prashant: To be blamed - started putting this as barometer of good researcher, that you would have your own content
- Kick-started: Everyone started putting out own content
- Overabundance of content: Same thing explained in 20 different directions by 30 different people
- Not difficult: To actually have content out in public
- Difficult: To identify what is good, what is bad
Research Then vs Now:
- Old days: “Let me search something, I am not going to find what looking for, will have to make do with whatever can find”
- Now: “I found 200 articles, let me figure out what are they actually talking about, may end up getting that one thing after reading 200 articles because all nudging towards one point but not actually talking about that one point”
- Research now more difficult: Because there is far more content overall than the actual content you need
- BS content: On top of that, people write BS content purely to gain popularity
- Corporate world, SEO world: Realized this kind of content gives better market value
- People writing content: Nowhere related to community, more related to environment - putting content out so SEO brings more traffic
- Don’t care: What you do after read that content - just there to grab traffic, show ads, earn few cents
Quality Research:
- Project Zero: People doing deep dive technical research
- Spending hours, days, months: Trying figure out solution to problem - that’s quality research
- People writing anything: Not saying don’t do that - saying do that as much as can, more good content going to be there
- If more people genuinely interested: Write about it, more genuine content going to be there
- Problem increased: For researchers - content online which was base of how should go about doing it has increased manifold
Example:
- Want to do X: Could be “I want to test if certain software works in this particular manner”
- Go on internet: Start searching “what happens if this happens”
- Get 50 articles: Supporting conclusion, 50 articles disagreeing with conclusion
- In state: “This might be right, that might be right, should I do it myself or read and understand pitfalls”
- Depending on complexity: If IoT device, might prefer reading arguments rather than doing it because setting up environment might take more time
- Wasting time: In both senses - half not going to be right things because people made assumptions and written articles
- Researcher’s job: More difficult now
- Exciting place: To be definitely
Advice for Quality Research:
- Way suggest everyone: Especially freshers or people starting up
- One: Write - need to have habit of writing
- Two: Explore what is available right now, find out what people are not focusing on, then deep dive on that
- Anything people focusing more on: Area already being explored by n number of people - chances of getting something unique less
- Looking at something ignored: People not looking at - high chances
- Example: James Kettle (Albinowax) - every year comes out with interesting research, everyone in web application space gaga over it
- All he is doing: Looking at HTTP headers - has not even reached HTTP body, still dealing with HTTP headers, just deep diving into it where no one else is looking at
- Key: Look at something which no one else is looking at, go big
- Be patient: Lot of times what going to find is something already someone else spotted - that’s fine
- Bug bounty hunters: If found duplicates, you’re on right track - at least found something valuable to be marked as duplicate
- Duplicate is fine: Finding something anyone already discovered is fine - keep going, keep looking at things that are areas people not exploring much
Future of Security:
Web3:
- Term evolved: Somewhere along the way - the web3 term was known back in the 2014-15 time frame, but that was not the web3 we know of right now
- Web3 at that time: About semantic web - web environment where machines can talk with machines, make sense of content
- Semantic web: Got lost, web3 term got lost, stuck with web2 (consumer-producer setup)
- 2018: Web3 started getting back into traction with cryptocurrency and blockchain space
- Within web3: Bunch of different interesting things happening
- Metaverse concept: Coming in - allowing people to sit on homes, do lot of things online
- Blockchain as concept: Being touted too much
- Cryptocurrencies: Being spearheaded
- When new technology comes out: Fanatics, people there to make money, people there because incentive to be made, very small set genuinely interested in technology space
- Most people here: Not about technology - if start talking about “how does IPFS work” or “how do we store, how do we ensure encryption with IPFS over decentralized platform” - people will be like “yeah you deal with that, tell me which coin going to rise tomorrow”
- Sad part: About web3 at this point
- Move towards tech aspects: Going to be interesting
- Cryptocurrency aspects: More of speculative game - hardly few currencies dealing with issues or concerns of world, most there because needed token to get money from other people (raising fund without giving equity by giving tokens)
Blockchain:
- Lot happening: Within blockchain space
- Layman’s term: Ledger which is tamper-proof (that’s aim)
- Banks implementing: Blockchains - trying do transactions over blockchains, immutable records maintained
- Slow, not fast: But gives immutability which important in lot of aspects
Metaverse:
- Bringing in: Lots of new stuff
Web3 Security:
- Interesting space: To look at from security standpoint
- Look at journey: Desktops and servers, then mobile space
- Mobile space: First six years, kind of bugs found were bugs already found in desktop space and patched - mobile just rediscovering them
- Same happening with web3: Kind of bugs being found right now are things other softwares have already fixed, already worked on
- Global lock: To avoid time of check time of use kind of issues - very common in programming languages nowadays
- Solidity: Language for smart contracts - slowly getting into that space where “oh yeah time of check time of use is bigger problem, need to get into lock state, have precautions against it”
- Smart contracts: People more interested towards source code analysis - would not want to do source code analysis because cumbersome, but now because money involved people more interested
- Time of check time of use: Very common attack pattern
- Nature of being open source: Does two things - everything is open, anyone can see, which also means everyone assumes someone else has seen it, so no one actually looks at it
- Cover: Can actually have own code in there in plain sight, no one looking at it because everyone assuming 99% of other people would have looked at it
- Attacks/scams: Also happening
- Interesting space: To keep eye on - not going anywhere, going to become bigger and bigger, too much money already happening
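The time-of-check/time-of-use pattern mentioned above is easiest to see in code. A minimal sketch, not from the episode: an unguarded check-then-use withdrawal races, while holding one lock across check and use closes the window (the same idea as the global lock smart-contract code is adopting):

```python
import threading

_lock = threading.Lock()

def unsafe_withdraw(account: dict, amount: int) -> bool:
    # Time-of-check: inspect the balance...
    if account["balance"] < amount:
        return False
    # ...time-of-use: another thread may have spent the money in between,
    # so this write can drive the balance negative (a double-spend).
    account["balance"] -= amount
    return True

def safe_withdraw(account: dict, amount: int) -> bool:
    # Holding one lock across both check and use closes the race window.
    with _lock:
        if account["balance"] < amount:
            return False
        account["balance"] -= amount
        return True

acct = {"balance": 100}
print(safe_withdraw(acct, 60), safe_withdraw(acct, 60))  # True False
```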
Web2 Attack Patterns:
- Web2 not going anywhere: Anytime soon
- Broken access control: Always going to be there
- Single page applications: More people moving towards - idea that will not leverage server side language, rather rely on API
- Serverless: Will not use server, will use serverless
- Problem: Serverless does not mean there is no server - there is still a server running somewhere, handled by someone else
- API based approach: Not going to make flaws in code, but flaw may exist in API connecting to code
- Problems still going to be there: Just have to look at it from slightly different angle
- Single page applications: More JavaScript driven - may not even have lot of back and forth GET and POST requests, everything happening over WebSockets
- Real-time communication: Interesting space to keep tab on
- WebSockets: Lot more apps utilizing now
- Earlier stages: Didn’t have proper tools to deal with them to test
- Companies implement encryption: In very bad ways to add additional layer of difficulty for tester
- Same world problems: Just being new dress coming up again
Desktop/AD/Azure:
- AD bugs: A lot more bugs coming out in different systems - a recent one was the ADCS issue, a newer one Golden gMSA (golden managed service accounts)
- Same concept: Which was there before when used to attack AD, but now identifying same issues in different components which make AD ecosystem
- Still seeing: In Active Directory, in Azure - same thing
- Especially with cloud: See certain things coming up again and again
AppSec vs NetSec:
- NetSec slowly becoming absent: Let me elaborate
- AppSec definition: Application security - dealing with web applications, desktop applications, applications in general
- NetSec definition: Network layer components - routers, switches, protocols
- Modern environments: Being built over cloud
- Environment deployments: Happening over automated setups where network components effectively configuration file
- Infrastructure as Code: Config as Code setup - don’t actually deal with device themselves, deal with software component, configure it, device works way wants to work
- 5G network: If the information is right, even 5G networks are effectively moving towards software-as-a-service or software-stack-based architecture
- Software stacks more open: Earlier, protocols used proprietary transmission protocols and proprietary packet structures - now moving towards HTTP, FTP, TCP kind of structures
- Allows everyone: To use existing appsec kind of skills over network security area also
- Conventional infrastructure: Not going anywhere - there, just less and less in use, more going in virtual direction
- Network security components: Learning about protocols, system internals, debugging protocol/packet, dealing with it from that angle - still relevant, still something people should be aiming for
- Specialization required: Going to be more demanding - basically means less and less vacancies, less and less people required in job market
- If want to make money: If want to be in hip market, AppSec is way to go
- If already good with NetSec: Established record - better to be that one person among millions who actually knows how to deal with specific device, then get paid
- Right now: Throw a stone and you'll find a web application hacker
- Look at a continent: Search an entire continent and you'll find maybe two or three mainframe hackers
- Skill set: Take it to a level where you're among the only handful of people who can deal with it
Advice for New Web Application Security Testers:
- Don’t aim for big players: At start
- Don’t aim for bounty: That has just appeared on bug bounty platform - those places have so many people swimming around, would not find something meaningful very quickly, would end up finding duplicates
- Finding duplicates is good: If want to keep finding in newer bounties, find stuff, report them, if duplicate that basically means found issue, just not first person finding it - still win because in right direction
- Suggest looking at bounties: Where there is no money involved - on Bugcrowd, HackerOne, and similar platforms there are programs that pay no money but give point bounties or t-shirts
- Professionals doing bug bounties for money: Will conveniently avoid these bounty spaces
- Less competition: In there - aim for that to begin with
- Not going to get anything: But going to get confidence that “yes I am finding something”
- More find duplicates: That’s also fine - duplicate means in right direction
- Once figured out: This is where suggest everyone looking at becoming professional in terms of getting job in infosec space
- Not focus on one specific bug type: Rather explore as many bug types as possible
- If trying make career in bug bounty: If trying make money via bug bounty, need to dig deep into one specific issue, forget about everything else
- Example: XSS generally paid for like 50 or 100 dollars a piece, SQL injection or code execution paid maybe 10,000-20,000 dollars
- Don’t need find one beautiful code execution: If can find 200 XSS, going to end up making same amount of money
- Difference: If keep digging into one specific area, specialized in that area, will have demand for that specific flaw, but outside that specific flaw realm don’t know much
- If want be part of corporate structure: Not going to hire for one specific flaw unless that one specific flaw very very unique
- Finding people who can find XSS: Very common
- Finding people who can find LDAP injection: Or finding people who can find business logic flaw where money transaction happening on multi-stage environment - going to be very tricky
- If one of those: With specialized capability, will be hired for specialized capability, people fine paying for just finding that because worth money
- If just finding XSS: People might not be interested in hiring because automated scanner can find it, other people can also find it - value goes down
- Depending on how want to play game: In the bug bounty system where you earn money per bug - just deep dive into XSS, find as many XSS as possible (easy, replicable, more comfortable with, relatively easy because already known)
- On web app testing side: On finding bug bounty side - aim for one area and dig deep into it if want make more money
- Aim for multiple bug types: If in for getting into corporate environment via this credential
- Real world applications different: Only way to get that kind of exposure is to have more and more tests under belt
- More number of applications seen: More number of websites seen, more ideas would have about what can go wrong in them
Prashant’s Additional Advice:
- Try to learn: What and how exactly does the attack work
- Be able to identify: If those specific components are exactly being used in real world application, and if those conditions are met then would it be vulnerable
- That’s what generally do: In recon phase
- Trying to identify: How or why application responds in way it does for certain payloads
- That sort of helps: Identify different vulnerabilities in application
- Just getting list of payloads: From PayloadsAllTheThings and just firing them without knowing why or how application should behave - not going to learn how to find them in real applications
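Prashant's point - understanding why the application responds the way it does rather than blindly firing a payload list - can be sketched as a small decision step. The marker string and function name here are illustrative, not from the episode:

```python
import html

MARKER = "zx9'\"<probe>"  # unusual string with special chars, unlikely to occur naturally

def classify_reflection(response_body: str, marker: str = MARKER) -> str:
    """Classify how an injected probe marker came back in a response body."""
    if marker in response_body:
        return "reflected-unescaped"   # special chars intact: worth digging into
    if html.escape(marker, quote=True) in response_body:
        return "reflected-escaped"     # app encodes output: XSS unlikely as-is
    return "not-reflected"

# Simulated responses instead of live HTTP, to show the decision logic:
print(classify_reflection("<p>Hello zx9'\"<probe></p>"))                       # reflected-unescaped
print(classify_reflection("<p>Hello " + html.escape(MARKER, True) + "</p>"))   # reflected-escaped
```

This is the recon mindset in miniature: one probe tells you how the app treats your input, which then tells you which payload classes are even worth trying.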
Key Insights:
- Security then (2000-2010): Hacking was fun, exciting, easier to find vulnerabilities
- Security now (2010-2022): Established job opportunity, commercial enterprise, nine-to-five for many
- Entry paths: Earlier from sysadmin/other role, now can start directly from college
- Tools: Scanners evolved but still aids, manual testing always needed for business logic
- Bad actors: Much more organized, structured, bigger incentives
- Defense: Evolving because offense evolved, logging solved but monitoring/context still problem
- Broken access control: Always there, balance between security and convenience, easy to screw up
- Open source: From big no-no to 80% of code, cascading dependencies, SBOM just start
- OS: Does not define hacker, modern OSes interlinked, don’t get stuck on one
- Research: More difficult now due to overabundance of content, look at what people not focusing on
- Future: Web3 interesting space, Web2 not going anywhere, NetSec slowly becoming absent, AppSec way to go
Actionable Takeaways:
- Security then was fun/exciting, now is commercial enterprise
- Entry paths changed - can start directly but still tough for freshers
- Scanners evolved but still aids - manual always needed for business logic
- Bad actors much more organized - bigger incentives
- Defense evolving because offense evolved
- Logging solved but monitoring/context still problem
- Broken access control always there - balance between security and convenience
- Open source from big no-no to 80% of code
- SBOM just start - not solution, just knowing what you have
- OS does not define hacker - don’t get stuck on one
- Research more difficult now - look at what people not focusing on
- Web3 interesting space but rediscovering old bugs
- NetSec slowly becoming absent - AppSec way to go
- For bug bounty: Dig deep into one area if want money, explore multiple if want corporate job
- Learn what and how attack works - don’t just fire payloads