It's no secret that the rapid speed of modern software development means an increased likelihood of risky flaws and vulnerabilities in your code. Developers are working fast to hit tight deadlines and create innovative applications, but without the right security solutions integrated into your processes, it's easy to hit security roadblocks or let flaws slip through the cracks.
We recently dug through the ESG survey report, Modern Application Development Security, which uncovers some interesting data about the state of DevOps integration in the modern software development process. As the report states, DevOps integration is critical for improving your organization's application security (AppSec) program, as automating and integrating solutions removes some of the manual work that can slow teams down and moves security testing into critical parts of the development process.
"DevOps integration reduces friction and shifts security further left, helping organizations identify security issues sooner," the report says. "While developer education and improved tools and processes will no doubt also improve programs, automation is central to modern application development practices."
According to the survey results, nearly half of organizations agree; 43 percent believe that DevOps integration is the most important piece of the puzzle for improving their AppSec programs. The report also outlines 10 elements of the most successful AppSec programs, and topping that list is ensuring that your AppSec controls are highly integrated into the CI/CD toolchain.
For some survey respondents, that's easier said than done. Nearly a quarter (23 percent) said that one of their top challenges with current AppSec testing solutions is poor integration with existing development and DevOps tools, while 26 percent said they experience difficulty with, or a lack of, integration between different AppSec vendor tools.
AppSec tool proliferation is a problem too, with a sizeable 72 percent of organizations using more than 10 tools to test the security of their code. "Many organizations are employing so many tools that they are struggling to integrate and manage them. This all too often results in a reduction in the effectiveness of the program and directs an inordinate amount of resources to managing tools," the report explains.
So where should organizations like yours start? By selecting a vendor with a comprehensive offering of security solutions that integrate to help you cover those bases and consolidate solutions while reducing complexity. That's where Veracode shines. We bring the security tests and training tools you need together into one suite so that you can consolidate and keep innovating, securely. And your organization can scale at a lower cost, too: our range of integrations and Veracode solutions are delivered through the cloud for less downtime and more efficiency.
We aim to simplify your AppSec program by combining five key analysis types in one solution, all integrated into your development process. From "my code," to "our code," to "production code," we have you covered with Static Analysis (SAST), Dynamic Analysis (DAST), Software Composition Analysis (SCA), Interactive AppSec Testing (IAST), and Manual Penetration Testing (MPT).
Automating SAST, DAST, and SCA in the pipeline means that you can incorporate testing without needing to wait for your security team to step in, fixing flaws the moment you spot them to keep projects moving forward quickly. In fact, by building and integrating security testing into their CI/CD pipeline, we know that some development teams have reduced their median time to remediation (MTTR) by a whopping 90 percent, driving down risk and freeing up valuable time.
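As a rough sketch of what that pipeline integration can look like (the job names and scanner commands below are hypothetical placeholders, not specific Veracode CLI syntax), a CI configuration might run SAST and SCA on every commit and gate the merge on the results:

```yaml
# Hypothetical GitLab-style pipeline; the scanner invocations are placeholders.
stages:
  - build
  - security-test

sast-scan:
  stage: security-test
  script:
    # Static analysis of first-party code ("my code")
    - sast-scanner --source . --fail-on high
  allow_failure: false   # a high-severity flaw blocks the merge

sca-scan:
  stage: security-test
  script:
    # Composition analysis of open source dependencies ("our code")
    - sca-scanner --manifest package-lock.json --fail-on high
  allow_failure: false
```

Because both jobs run automatically on every commit, flaws surface while the change is still fresh, rather than after a scheduled scan by the security team.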
Want to learn more about integrating AppSec into the development process? Check out this short demo video of Veracode Static Analysis.
Many organizations operating in e-commerce, hospitality, healthcare, managed services, and other service industries rely on web applications. And buried within the application logs may be evidence of fraudulent use and/or outright compromise! But, let's face it, finding evil in application logs can be difficult and overwhelming for a few reasons, including:
- The wide variety of web applications with unique functionality
- The lack of a standard logging format
- Logging formats that were designed for troubleshooting application issues and not security investigations
- The need for a centralized log analysis solution or SIEM to process and investigate a large amount of application log data
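The lack of a standard logging format is often the first hurdle. As a minimal sketch (the field names and sample events below are fabricated for illustration), one approach is to map each application's raw events onto a single common schema before analysis:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical raw events from two applications: one logs JSON,
# the other a bespoke syslog-style text format.
RAW_EVENTS = [
    '{"ts": "2020-07-16T14:02:11Z", "user": "labradorable", "action": "login", "src": "203.0.113.7"}',
    'Jul 16 14:03:27 portal action=password_reset user=fluffy src=198.51.100.9',
]

TEXT_PATTERN = re.compile(
    r'^(?P<month>\w{3}) (?P<day>\d+) (?P<time>[\d:]+) \S+ '
    r'action=(?P<action>\S+) user=(?P<user>\S+) src=(?P<src>\S+)$'
)

def normalize(raw: str) -> dict:
    """Map a raw log line onto one common schema: timestamp, user, action, src_ip."""
    if raw.lstrip().startswith("{"):
        event = json.loads(raw)
        return {
            "timestamp": event["ts"],
            "user": event["user"],
            "action": event["action"],
            "src_ip": event["src"],
        }
    match = TEXT_PATTERN.match(raw)
    if not match:
        raise ValueError(f"unrecognized log format: {raw!r}")
    # Syslog-style lines omit the year; assume the current year, in UTC.
    year = datetime.now(timezone.utc).year
    ts = datetime.strptime(
        f"{year} {match['month']} {match['day']} {match['time']}", "%Y %b %d %H:%M:%S"
    ).replace(tzinfo=timezone.utc)
    return {
        "timestamp": ts.isoformat().replace("+00:00", "Z"),
        "user": match["user"],
        "action": match["action"],
        "src_ip": match["src"],
    }

normalized = [normalize(raw) for raw in RAW_EVENTS]
```

In a SIEM this normalization is typically done at ingest time (field extractions in Splunk, for example), but the principle is the same: every downstream search should be able to rely on one set of field names.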
So, in this blog post, we discuss threat modeling concepts that can help prioritize logging decisions and unleash the ability to identify and investigate attacks against an application. To help us demonstrate, we'll describe situations for a fictitious organization called Dog and Feline Urgent Response, or DFUR, that we presented at the 2020 SANS Digital Forensics & Incident Response (DFIR) Summit.
We selected Splunk Enterprise Security (ES) as DFUR’s SIEM and logging analysis platform, but this is just one option and there are multiple technologies that can facilitate application log analysis. We created a Splunk application called “Dog and Feline Urgent Response (DFUR)” available on the FireEye GitHub that contains pre-indexed data and dashboards that you can use to follow along with the following attack scenarios.
But, enough kitten around. Let’s introduce you to DFUR!
DFUR: Dog and Feline Urgent Response
DFUR is a long-standing organization in the pet wellness industry that provides care providers, pet owners, and insurance providers with application services.
- Care providers, such as veterinarians, use DFUR to process patient records, submit prescriptions, and order additional care services
- Pet owners use DFUR to make appointments, pay bills, and see diagnostic test results
- Insurance providers use DFUR to receive and pay claims to pet care providers
Application users log into a web portal that forwards logon and user transaction logs to DFUR’s Splunk ES instance. Backend databases store metadata for users, such as street addresses and contact information.
DFUR Security Team Threat Modeling
After stumbling through several incidents, the DFUR security team realized that their application did not log the information needed to answer investigative questions clearly and quickly. The team held workshops with technical stakeholders to develop a threat model and improve their application security strategy. They addressed questions such as:
- What types of threats does DFUR face based on industry trends?
- What impact could those threats have?
- How could the DFUR application be attacked or abused?
- What log data would DFUR need to prove an attack or fraud happened?
The DFUR team compiled the stakeholder feedback and developed a threat profile to identify and prioritize high-risk threats facing the DFUR application platform, including:
- Account takeover and abuse
- Password attacks (e.g., credential stuffing)
- Bank account modifications
- PHI/PII access
- Health service modifications or interruptions
- Fraudulent reimbursement claim submission
- Veterinarians over-prescribing catnip
The DFUR security team discussed how they could identify threats using their currently available logs, and, well, the findings were not purr-ty.
Logging Problems Identified
The DFUR team used their threat model to determine what log sources were relevant to their security mission, and then they dug into each one to confirm the log events were valid, normalized, and accessible. This effort produced a list of high-priority logging issues that needed to be addressed before the security team could move forward with developing methods for detection and analysis:
- Local logs were not forwarded to their Splunk ES instance. Only a limited subset of logging reached Splunk ES, so DFUR analysts couldn't search for the actions performed by users who were authenticated to the application portal.
- Inaccurate field mapping. DFUR analysts identified extracted field values that were mapped to incorrect field names. One example was the user-agent in authentication log events had been extracted as the username field.
- Application updates sometimes affected Splunk ingestion and parsing. DFUR analysts identified servers that didn't have a startup script to ensure log forwarding was re-enabled upon system reboot. Application updates changed the logging output format, which broke field extractions, and DFUR analysts didn't have a way to determine when log sources weren't operating as expected.
- Time zone misconfigurations. DFUR analysts determined their log sources had multiple time zone configurations which made correlation difficult.
- The log archival settings needed to be modified. DFUR analysts needed to configure their Splunk ES instance data retirement policy to maintain indexed data for a longer time period and archive historical data for quick restoration.
- Source IP addresses of users logging into the portal were masked by a load balancer. The DFUR analysts realized that the source IP address for every user logon was a load balancer, which made attribution even more difficult. The X-Forwarded-For (XFF) field in their appliances needed to be enabled.
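Once the load balancer populates X-Forwarded-For, the analysis side still has to interpret it carefully, because the header is client-controlled and can be spoofed. A minimal sketch of the usual approach (the proxy addresses below are illustrative RFC 5737 values, not DFUR infrastructure) is to walk the hop chain right-to-left past the proxies you trust:

```python
from ipaddress import ip_address
from typing import Optional

# Proxies/load balancers the organization trusts (illustrative addresses).
TRUSTED_PROXIES = {"192.0.2.10", "192.0.2.11"}

def client_ip(remote_addr: str, xff_header: Optional[str]) -> str:
    """Recover the real client IP when requests traverse a load balancer.

    X-Forwarded-For is attacker-appendable, so walk the chain right-to-left
    and return the first hop that is not one of our own trusted proxies.
    """
    if remote_addr not in TRUSTED_PROXIES or not xff_header:
        return remote_addr
    hops = [hop.strip() for hop in xff_header.split(",")]
    for hop in reversed(hops):
        ip_address(hop)  # raises ValueError on a garbled or spoofed entry
        if hop not in TRUSTED_PROXIES:
            return hop
    return remote_addr
```

With the true client IP restored, logon events can finally be attributed to something more useful than the load balancer's address.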
Analysis Problems Identified
The DFUR infosec team reviewed how previous incidents involving the DFUR application were handled. They quickly learned that they needed to solve the following operational issues before they could effectively investigate application attacks:
- Inconsistency during manual analysis. DFUR analysts took different approaches to searching their Splunk ES instance, and they would reach different conclusions. Playbooks were needed to define a standard investigative methodology for common incident scenarios.
- No documentation of log fields or sources. Some DFUR analysts were not aware of all relevant data sources that were available when investigating security incidents. This led to findings that were based on a small part of the picture. A data dictionary was needed that defines the log sources and fields in the DFUR Splunk ES instance and the retention time for each log source.
- Application logs were designed for troubleshooting, not investigating. The DFUR application was configured to log diagnostic information, application errors, and limited subsets of successful user activity. The DFUR team needed to reconfigure and develop the application to record more security-related events.
DFUR: New and Improved Monitoring and Detection
The DFUR team addressed their application log and analysis problems and started building a detection and investigative capability in their Splunk ES instance. Using the analysis workflows developed during the threat modeling process, the DFUR team designed Splunk dashboards (Figure 1) to provide detection analytics and context around three primary datapoints: usernames, IP addresses, and care providers (“organizations”).
Figure 1: DFUR monitoring and detection dashboard
The DFUR team created the Splunk dashboards using Simple XML to quickly identify alerts and pivot among the primary datapoints, as seen in Figure 2. The DFUR team knew that their improved and streamlined methodology would save time compared to exporting, analyzing, and correlating raw logs manually.
Figure 2: Pivoting concepts used to develop DFUR dashboards
Newly armed (legged?) with a monitoring and detection capability, the DFUR team was ready to find evil!
Attack Scenario #1: Account Takeover
The next morning, the DFUR security team was notified by their customer service team that a veterinarian provider with the username ‘labradorable’ hadn’t received their daily claims payment and had noticed that their banking information in the DFUR portal was changed overnight.
A DFUR analyst opened the User Activity Enrichment dashboard (Figure 3) and searched for the username to see recent actions performed by the account.
Figure 3: User Activity Enrichment dashboard
The analyst reviewed the Remote Access Analytics in the dashboard and identified the following anomalies (Figure 4):
- The username reminder and password reset action was performed the day before from an Indonesia-based IP address
- The user account was logged in from the same suspicious IP address shortly after
- The legitimate user always logs in from California, so the Indonesia source IP login activity was highly suspicious
Figure 4: Remote access analytics based on user activity
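The geographic anomaly the analyst spotted can be expressed as a simple baseline check. As a sketch (the GEO lookup table and events below are fabricated, like the DFUR demo data; in practice the country would come from a GeoIP database or a Splunk `iplocation` enrichment), compare each new logon's source country against the countries the user has historically logged in from:

```python
from collections import defaultdict

# Hypothetical IP-to-country lookup (illustrative RFC 5737 addresses).
GEO = {
    "198.51.100.23": "US",  # the user's usual California egress
    "203.0.113.88": "ID",   # the suspicious Indonesia-based address
}

# Historical logon events per user (fabricated demo data).
HISTORY = [
    {"user": "labradorable", "src_ip": "198.51.100.23", "action": "login"},
    {"user": "labradorable", "src_ip": "198.51.100.23", "action": "password_reset"},
]

def baseline_countries(events):
    """Build each user's set of previously seen source countries."""
    seen = defaultdict(set)
    for event in events:
        seen[event["user"]].add(GEO.get(event["src_ip"], "unknown"))
    return seen

def is_anomalous(event, baseline):
    """Flag a logon whose source country is outside the user's baseline."""
    country = GEO.get(event["src_ip"], "unknown")
    return country not in baseline.get(event["user"], set())

baseline = baseline_countries(HISTORY)
new_logon = {"user": "labradorable", "src_ip": "203.0.113.88", "action": "login"}
```

The same comparison drives a dashboard panel well: the baseline is precomputed on a schedule, and the panel only renders logons that fall outside it.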
The DFUR analyst clicked on the Application Activity tab in the User Activity Enrichment dashboard to see what actions were performed by the user while they were logged in from the suspicious IP address. The analyst identified that the account, while logged in from the suspicious IP address, changed its email address and added two (2) new bank accounts, as seen in Figure 5.
Figure 5: Application activity timeline filtered based on IP address
The DFUR analyst confirmed that the two (2) bank accounts were added by the user to the care provider with organization ID 754354, as seen in Figure 6.
Figure 6: Bank accounts added and assigned to a provider
By clicking on the organization ID in the Splunk results table, the DFUR analyst triggered a drill-down action to automatically open the Organization Enrichment Dashboard and populate the organization ID value with the results from the previous panel (Figure 7). The DFUR analyst determined that the bank routing information for the new bank accounts was inconsistent with the organization’s mailing address.
Figure 7: Organization Enrichment Dashboard
The activity indicated that the attacker had access to the user’s primary email and successfully reset the DFUR account password. The DFUR analyst confirmed that no other accounts were targeted by the suspicious IP address (Figure 8).
Figure 8: IP Address Enrichment dashboard
Attack Scenario #2: Credential Stuffing
Later that afternoon, the DFUR team began receiving reports of account lockouts in the patient and provider portals when users tried to log in. The security team was asked to investigate potential password attack activity on their DFUR platform.
The DFUR analyst pulled up the main monitoring and detection dashboard and scrolled down to the panel focused on identifying potential password attack activity (Figure 9). They identified five (5) IP addresses associated with an elevated number of failed login attempts, suggesting a password spray or credential stuffing attack with varying success.
Figure 9: Dashboard panel showing potential password attack events
The DFUR analyst clicked on one of the IP addresses which triggered a drill-down action to open the IP Address Enrichment dashboard and prepopulate the IP address token value (Figure 10).
Figure 10: IP Address Enrichment dashboard
The DFUR analyst identified more than 3,000 failed login attempts associated with the IP address with three (3) successful logins that morning. The Remote Access Analytics panels for the IP address further showed successful logins for accounts that may have been successfully compromised and need to be reset (Figure 11).
Figure 11: Remote access analytics for IP address
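The logic behind that dashboard panel can be sketched in a few lines. Assuming normalized authentication events with a source IP and an outcome (the events and the threshold below are fabricated for illustration), count failures per IP and surface any IP whose failure count is far above normal, together with its success count, since each success against a flagged IP marks an account that needs a reset:

```python
from collections import Counter

# Fabricated authentication events: (src_ip, outcome) pairs.
EVENTS = (
    [("203.0.113.50", "failure")] * 3000
    + [("203.0.113.50", "success")] * 3
    + [("198.51.100.23", "failure")] * 2
    + [("198.51.100.23", "success")] * 1
)

def password_attack_ips(events, fail_threshold=100):
    """Flag source IPs with an elevated failed-logon count.

    Any successes from a flagged IP identify accounts that were likely
    compromised by the password spray / credential stuffing attempt.
    """
    fails, successes = Counter(), Counter()
    for src_ip, outcome in events:
        (fails if outcome == "failure" else successes)[src_ip] += 1
    return {
        ip: {"failures": count, "successes": successes[ip]}
        for ip, count in fails.items()
        if count >= fail_threshold
    }

flagged = password_attack_ips(EVENTS)
```

In Splunk the equivalent is a scheduled search that aggregates failures and successes by source IP; the drill-down to the IP Address Enrichment dashboard then pivots on the flagged value.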
After implementing the newly developed logging and analysis capabilities and leveraging Splunk's security solutions, the DFUR security team drastically improved key metrics aligned with their application security mission:
- Identify compromise and fraud before customers report it
- Analyze 90% of application security events within 30 minutes
- Answer all investigation questions from users, compliance, and legal teams
Mandiant and the whole DFUR security team hope you can use the scenarios and references in this post to improve your log analysis and how you leverage a SIEM solution in the following ways:
- Reflect on your current logging gaps and capabilities to improve
- Enhance logs from “whatever the developers implemented” to “designed to be investigated”
- Develop investigative workflows that are reliable and repeatable
- Correlate pivot points between your data sources and streamline correlation capabilities
- Create monitoring and alerting capabilities based on threat modeling
- Lower the technical barrier for comprehensive analysis
- Implement similar analysis capabilities to those in the “DFUR” Splunk application, linked in the References section
- Understand that better logs lead to better security analytics and strengthen your security operations
For organizations that use Splunk security solutions as their SIEM, for automation, analytics, or log aggregation (or that want to try them with Splunk's free trial download), we developed an application called "Dog and Feline Urgent Response (DFUR)" to demonstrate application log forensic analysis and dashboard pivoting concepts. The code contains pre-indexed data and CSV files referenced by searches contained in four Splunk XML dashboards. All data, such as IP addresses and usernames, was fabricated for the purposes of the demo, and any association with organizations, users, or pets is coincidental.
Technology is constantly changing and advancing. Payment platforms are no exception. As these new platforms emerge, the software supporting the platform must be reliable and secure. Without secure payment platforms, payment transactions and data could be compromised.
The PCI Software Security Framework (SSF) sets standards and requirements for both traditional and modern payment software. The security standards, aimed at vendors, are in place to protect payment transactions and data, minimize vulnerabilities, and defend against cyberattacks.
To ensure that vendors are following the standards, Software Security Framework Assessors (SSF Assessors) perform evaluations of payment software products against the Secure Software Lifecycle (Secure SLC) and Secure Software Standards. The Secure SLC provides security requirements for payment software vendors to integrate security throughout the software development lifecycle. The Secure Software Standard provides security requirements for building secure payment software that protects the confidential data stored, processed, or transmitted in payment transactions. Following the evaluations, the PCI Software Security Council (SSC) lists both Secure SLC Qualified Vendors and Validated Payment Software on the PCI SSC website for merchants to reference.
The SSF encompasses the same requirements as the Payment Application Data Security Standard (PA-DSS), such as software development and lifecycle management principles for security in traditional payment software, but at a broader scale. The SSF not only validates traditional payment software but also provides a methodology and approach for evaluating modern and future payment software. The methodology for new and future payment software encourages nimble development, developer training and secure coding practices, and integration and automation of security into the software development lifecycle.
Since separate standards for PA-DSS are no longer necessary, the PCI SSC will retire PA-DSS at the end of October 2022. To help you prepare for the transition from PA-DSS to SSF, here are some need-to-know facts listed on the PCI webpage:
- Existing PA-DSS validated applications will remain on the List of Validated Payment Applications until their expiry dates. At the end of October 2022, PCI SSC will move PA-DSS validated payment applications to the "Acceptable Only for Pre-Existing Deployments" tab.
- You can submit new payment applications for PA-DSS validation until June 30, 2021.
- PCI SSC now lists both Secure SLC Qualified Vendors and Validated Payment Software on the PCI SSC website.
- PCI will recognize payment software that meets the Secure Software Standard on the PCI SSC List of Validated Payment Software, which will supersede the current List of Validated Payment Applications at the end of October 2022.
If you are a PA-DSS validated vendor, or not yet validated by PCI, and need help meeting the new SSF requirements, Veracode can help. A good place to start is our three-tiered Veracode Verified™ program, which offers a proven roadmap to a mature and comprehensive AppSec program and includes many elements required for compliance with security regulations, including PCI SSF.
Check out our Veracode Verified webpage to learn more about the program.
It is great that your organization takes securing payment data seriously. Now is the time to take the next step forward and make a difference by becoming a PCI SSC Participating Organization (PO). POs play a key role both in influencing the ongoing development of PCI Security Standards and programs, and in helping ensure that PCI Security Standards are implemented globally to secure payment data.