Buguardian Blog

Check out our latest product updates, use cases and business insights from Buguardian and the community.

Your Application Security Program May Not Work, And Here Are 3 Reasons Why.
One of the main reasons organizations fail to take action on application security is that they simply do not know where to start. But even once a starting point is chosen, baseline criteria are defined, and a strategy is created, success is not guaranteed. There are a number of common pitfalls that companies overlook when building their security programs. Here are the three most common ones to watch out for if you want your application security efforts to succeed:

1 — Lack of policy enforcement
2 — Lack of expertise on how to reduce risk
3 — Failure to create a culture of security

Let's take a closer look at each of these overlooked dangers.

Lack Of Policy Enforcement

IT teams can create strong application security policies to reduce the number of vulnerabilities in the applications an organization builds and purchases. However, if those policies are ignored, or if team members routinely work around them, they become useless in practice. Alongside the policies themselves, it is vital to raise awareness of why they matter and how they should be followed. Most important of all is establishing concrete mechanisms that enforce them.

Lack Of Expertise On How To Reduce Risk

Application security is quite different from other forms of IT security. Building a security program requires extensive planning, coordination, and synchronization between teams. It is also essential to employ staff who understand risk-reduction strategies, the application development process, and programming techniques. In other words, these programs demand security professionals who are highly knowledgeable in their fields; only organizations that work with competent security experts can make them succeed.
Without partnering with such experts, you cannot analyze your starting point, take the right measurements, or set the right targets; moving forward becomes like trying to hit a target in the dark. As you can imagine, hitting that target is very unlikely.

Failure To Create A Culture Of Security

Making security part of the organization's culture is a crucial step in ensuring that application security policies are actually followed. Many organizations work long hours to build strong, practical application security, only to fail because the rest of the organization underestimates and deprioritizes that work. This is why effective security requires all employees to treat security as a rule and see it as a fundamental component of the organization's existence. Building a security culture starts with making sure every employee understands how security affects the entire organization and why following security principles is everyone's responsibility. Advanced application security programs require employees to be more cautious than they normally would be, which means changing many routine behaviors. Without understanding the value of security, developers, software purchasers, and others in the organization will always look for ways to circumvent policies to make their jobs easier.

After all the effort of building an application security program and weaving it into business processes, having poor understanding and weak policy enforcement undo that effort is the last thing you want. Yet it happens more often than you might think. Enforcing realistic policies, working with experts in risk mitigation, and creating a security culture that includes everyone in your organization will go a long way toward ensuring the program achieves its goals.
The future of cyber security: What has happened, and what should we expect?
Due to the Covid-19 pandemic, organizations in almost every industry have stepped up their digital transformation efforts to make online transactions easier for employees and customers. But as more organizations move online, the digital attack surface is growing at a record pace. In fact, research finds that 76 percent of web applications have at least one vulnerability. Three key technology trends are expected to shape cybersecurity over the next few years.

The first is “ubiquitous connectivity”. We all know how quickly almost everyone and everything in the world has become interconnected. Which of our ancestors could have predicted the day we would turn on our coffee machines with a simple voice command? Not many; developments that would have sounded like fantasy 50–60 years ago are now reality. At the end of 2019 there were around 7.6 billion active IoT devices, a number expected to reach 24.1 billion by 2030. Alongside this, businesses are shifting their applications to the cloud. But IoT devices and cloud-based software also invite attack. According to the Verizon 2021 DBIR, web applications were the source of 39% of breaches, double the share in 2019. The reasons are the expanded web application surface and the sudden, pandemic-driven move to cloud-based operations. Wireless and 5G connectivity also contribute to the proliferation of attacks: think of online shoppers who use their smartphones and check their e-mail without any firewall. The interfaces behind all of these operations are built on APIs, and when the right security steps are not followed, APIs become one of the most attractive targets for cybercriminals.

The second trend is “abstraction and componentization”. Companies roll out new software and technologies remarkably fast, don't they?
It's almost as if you see a new software update every time you look at your iPhone. The speed of software deployments is no longer surprising or unexpected; the faster companies launch in competition with each other, the higher they climb. Many development teams are not only using the cloud to move faster, they are also turning to microservices. With microservices, development teams break large applications into smaller, reusable blocks of logic that are much easier to work on. Open source libraries are also used to speed up development. According to the State Of Software Security report, 97% of Java applications are written with open source libraries, and 46 percent of the insecure open source libraries in applications are transitive, meaning the library is pulled in indirectly through other open source libraries. In other words, the attack surface is not limited to the libraries your developers add directly; it also includes the libraries those libraries implicitly pull in. In the future, a trusted third-party review authority managing all public APIs is envisaged, making software publishers accountable through independent audits. There is also an awareness component here: developers need to understand the risk both in the libraries they use and in the libraries they touch indirectly. Finally, automation will play a big role; automating open source remediation, for example, will be critical.

The last trend we expect to affect cybersecurity is “hyperautomation of software delivery”. As mentioned under abstraction and componentization, delivery speed is crucial to getting ahead of competitors in the market. Speed will become even more important in a few years and will push businesses toward hypercompetitiveness. We expect businesses to automate as many processes as possible.
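The transitive-dependency risk described above can be sketched as a simple graph traversal: given the libraries an application declares directly and each library's own dependencies, the real attack surface is the transitive closure of the graph. The dependency graph below is entirely hypothetical and only illustrates the idea.

```python
from collections import deque

def transitive_dependencies(direct_deps, dep_graph):
    """Return every library reachable from the direct dependencies,
    i.e. all the open source code the application actually pulls in."""
    seen = set()
    queue = deque(direct_deps)
    while queue:
        lib = queue.popleft()
        if lib in seen:
            continue
        seen.add(lib)
        queue.extend(dep_graph.get(lib, []))
    return seen

# Hypothetical dependency graph: the app declares only two libraries,
# but each of those pulls in further libraries of its own.
graph = {
    "web-framework": ["http-core", "template-engine"],
    "json-parser": ["string-utils"],
    "http-core": ["string-utils", "tls-lib"],
}
deps = transitive_dependencies(["web-framework", "json-parser"], graph)
print(sorted(deps))  # two declared libraries expand to six that need auditing
```

The point of the sketch: a developer who vets only `web-framework` and `json-parser` has still shipped `tls-lib` and `string-utils`, libraries they never chose directly.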
Automating not only development processes but also the processes that interact with software delivery is in demand. Eventually, DevOps and pipeline automation will no longer be aspirations; they will be baseline expectations. In the end, everything that can be coded will be coded: security will be coded, compliance will be coded, infrastructure will be coded. This means cybersecurity will become more automated every day. We will see more and more organizations turning to DevSecOps. Security teams will become less operational and take on more supervisory roles, while developers become responsible for application security testing and for automating scans in existing processes. Vendors are already turning to AI and machine learning for tasks like identifying design vulnerabilities, threat modeling, and remediation, and this will only accelerate in the coming years. We also expect more and more vendors to offer automatic fixes for third-party code.

Finally, given these three trends and an attack surface that has grown more frightening than ever, we can expect an increase in cybersecurity regulation. Even government officials are now calling for stronger security and more transparency around cyber incidents. We expect the regulations to affect not only vendors interacting with the federal government but also vendors serving the public. For now, we can only predict the future of cybersecurity through inferences like these; time will tell.
Who Is Behind This Hat? 4 Different Cyber Attack Profiles
As organizations manage an ever-increasing number and breadth of databases, both on-premises and in the cloud, the surface where cyberattacks can occur has grown larger and harder to defend. Unfortunately, malicious individuals, internal or external, are just as aware of this as we are. They constantly research, test, and try to defeat application and data security solutions. As our security measures change and evolve, their attack methods evolve and grow more complex.

Knowing the behavior patterns of cyber attackers helps us understand the database threat landscape much better. A threat landscape is the set of all potential or actual cyber threats affecting a particular industry, user group, or environment, and understanding it for databases is very helpful in creating an effective, data-centric security strategy. In this post, we will examine four main cyber attacker profiles, one internal and three external, to get an idea of their methods and motivations so you can use that knowledge to prevent attacks.

Cyber attackers can be broadly divided into two categories: internal threats and external threats. An internal threat, the threat inside your organization, arises when data is accidentally or intentionally left vulnerable to attack. Internal attackers are often motivated by money, frequently accompanied by a grudge. An internal attacker may have a relatively easier job than an external one: they likely already have access to assets and credentials, and they attract far less suspicion. Of course, there are methods organizations can implement to reduce the risk of internal attacks. The most obvious and easiest is making sure employees do not do careless things like sharing their passwords with an insider or, worse, an outsider.
Similarly, you should make sure they log out properly when they finish working in environments containing sensitive data. Beyond that, it is very important for security teams to carefully set the permissions and privilege levels for access to sensitive data and to monitor those accesses; in short, a user should not have access to any data they do not need to know. Most importantly, and indeed at the root of all of this, user behavior should be watched closely. For example, if an internal user who has never touched a sensitive data source suddenly accesses it and starts downloading large amounts of sensitive data, security teams should be notified automatically and an inquiry should follow.

The attackers in the following three profiles are external, and their motivations vary.

The Hit And Run Attacker: When attackers in this profile see a vulnerability, a public database, or the like, they do what they can and leave. They do not search other databases, try to break into the organization's network, or exploit more exotic vulnerabilities. They take what they can get in the moment and sell it to the highest bidder.

The Curious Attacker: This profile usually has a predetermined goal, even a roadmap, but figures it won't hurt to look around a little without raising too much suspicion. They remain attached to their original purpose.

The Resident Attacker: The most dangerous profile. As in the Equifax breach, in which the information of 143 million people was stolen, including their social security numbers, this type of attacker penetrates the network and remains there for months, perhaps even years. They simultaneously use keyloggers, sniffers, and more to steal credentials and compromise databases.

Many organizations practically invite hit-and-run attackers in with open arms.
While most security teams do their best to prevent the exploitation of newly discovered vulnerabilities, some DBAs and DevOps staff run operations and workloads publicly in the cloud, beyond the reach or awareness of security teams. Left unsecured, that data becomes an easy target for hit-and-run attackers. If you are running services publicly, even just for search and analytics, make sure they are properly configured and updated. Roughly 75% of the data stolen in security breaches is known to be personal data. While a hit-and-run attacker might grab data with immediate value, such as a credit card number, a curious attacker can take their time, find data they can correlate, and cause far greater damage. In this sense, it is important to have in-house solutions with strong malware detection and prevention capabilities, making it harder to install and spread malware on end-user machines. The resident attacker plays the game step by step, strategically, and the best way to reduce the risk these attackers pose to your data is to play that game with them. To deal with the curious attacker, in addition to the tactics above, make sure your privileged users' passwords are changed frequently, and consider a zero-trust network to ensure robust data security controls. All of this, unfortunately, is only the tip of the risk-reduction iceberg.
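The behavioral-monitoring idea above, alerting when a user suddenly accesses a sensitive source they have never touched or pulls an unusual volume of data, can be sketched in a few lines. This is a deliberately simplified illustration: the table names, history structure, and row threshold are all hypothetical, not taken from any real product.

```python
def flag_anomalous_access(user, table, rows_read, history, row_threshold=10_000):
    """Simplified behavioral check: alert when a user touches a sensitive
    table they have never accessed before, or reads an unusually large
    volume of rows. `history` maps each user to the set of tables they
    normally access. All names and thresholds here are illustrative."""
    alerts = []
    if table not in history.get(user, set()):
        alerts.append(f"{user}: first-ever access to sensitive table '{table}'")
    if rows_read > row_threshold:
        alerts.append(f"{user}: bulk read of {rows_read} rows from '{table}'")
    return alerts

# Hypothetical access history for two users.
history = {"alice": {"orders"}, "bob": {"customers_pii", "orders"}}

# alice has never touched customers_pii and is downloading 250k rows:
# both conditions fire, so the security team gets two alerts.
print(flag_anomalous_access("alice", "customers_pii", 250_000, history))
```

A real deployment would learn `history` from audit logs and tune the threshold per data source, but the principle is the same: alert on deviation from the user's own baseline.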
Are you ready for the Zombie API Attack?
The next-generation web application firewall Buguardian WAF uses advanced machine learning to identify companies' shadow APIs and mitigate security breaches carried out through APIs. Buguardian WAF not only improves the security of your infrastructure but also makes life easier for web developers.

APIs, which enable communication between your web applications and other applications, are an important part of the security discipline. Some APIs live outside of IT governance and security processes. Because these APIs are invisible, security teams do not know what data and applications they can access; because they are undocumented, they pose significant security and governance risks. Shadow APIs allow hackers to steal valuable data or threaten enterprise applications, and zombie APIs are a subset of shadow APIs.

People create shadow APIs for different reasons. A developer in a hurry to finish a new project may write a shadow API as a shortcut to make the job more manageable. Sometimes one or more managed APIs are copied to support other services, and a copy is simply forgotten for as long as it keeps working. At the end of the day, an attacker gains access to an outdated API connection. Hackers are highly adept at finding unpatched old API versions, and then the adventure begins: accounts are hijacked one after another. A zombie API, in other words, is an API left running past its life cycle, or accidentally executed, that can end up damaging the institution or its data.

High-security risks

Shadow APIs appear in the OWASP API Security Top 10 under API9:2019 Improper Assets Management, and they pose numerous risks. For example, Facebook sends a numeric code to users' mobile phones to authorize those who have forgotten their password. The Facebook security team realized after a while that this could pose a threat.
The team therefore began blocking password-reset requests after ten attempts, so that hackers could not brute-force the six-digit combination. But another problem arose: developers had copied the original API for different web services in the past, and those copies lacked the new limit. The problems were resolved over time, but the lesson stands: shadow APIs can always cause trouble, so the use of APIs in companies requires careful management. Applications such as Buguardian WAF can detect such problems early and ensure that security measures are taken.

100 million PII records stolen

In another case, at an Indian company, an attacker who discovered an old API still running in the system simply changed the version number used by the new API. With this method, they reached the less secure legacy API and obtained personally identifiable information (PII) of more than 100 million people: usernames, emails, phone numbers, addresses, gender, dates of birth, photos, and work history. Subsequent investigations revealed that the old API had been running quietly at the company for four years without being noticed.

Buguardian WAF automates documentation by creating an inventory of live APIs and ensuring that every change is reflected in the documentation. When a new API version is released, it provides a comprehensive analysis of the old API and documents its status: is it updated, terminated, and so on? Ask Buguardian for more information on keeping your web applications free from shadow and zombie API risk.
Financial services: The focus of web attacks
Willie Sutton stole nearly $2 million over his professional bank-robbing career in the last century. Rumor has it that when a journalist asked, “Why do you rob banks?”, he replied, “Because that's where the money is.” Although Sutton denied the exchange in later years, it puts a finger on something very true about the financial services sector. These days, stealing money from brick-and-mortar banks is a decidedly old-fashioned approach. The currency cybercriminals now hunt is personal information, and where they look for it is the web applications through which online financial transactions flow like water. Make no mistake, financial institutions are still “where the money is”: financial services still holds the title of most-exploited sector, at a rate of 35%. The COVID-19 pandemic, from which cybercriminals have profited handsomely, has also driven significant growth in online banking, which in turn has significantly increased the volume of sensitive personal data available to steal. Between January 2021 and May 2021 alone, attacks on the financial services sector increased by 38%. Here are the top 5 security threats targeting financial services.

Deficiencies and errors in protecting sensitive data

The rapid (and unavoidable) rise of online banking and the digitization of financial services require managing larger and more complex volumes of customer data. With stricter, tougher data privacy requirements on the horizon, protecting sensitive data has become unprecedentedly difficult. The pace of change in this sector leaves many security gaps open and makes comprehensive protection hard to achieve. Cybercriminals are clearly well aware of this, as attacks on sensitive data are increasing at an alarming rate: according to research, more than 870 million records were compromised in January 2021 alone.
That is more than the total number of records compromised in all of 2017.

DDoS attacks

These DDoS attacks target the top layer of the OSI model, the application layer, where applications communicate over internet protocols. The goal is to bombard a server with so many requests that its traffic is paralyzed and it can no longer respond. The more requests per second (RPS), the more intense the attack. The Digital Banking Report states that the first goal of financial service providers should be improving the customer experience. Businesses that respond quickly to attacks that disrupt the customer experience not only earn higher referral rates but also do better at retaining and cross-selling to existing customers. When customers' access to online banking is interrupted, the usual reaction is anger, and that anger can lead to complaints on social media, customers switching providers, and damage to the bank's brand.

RDoS threats

Toward the end of 2020 there was a significant increase in Ransom Denial of Service (RDoS) threats, many of them targeting thousands of businesses worldwide, including financial services. RDoS attacks are extortion-based DDoS threats motivated by financial gain: the attackers demand payment in bitcoin to stop, or never launch, a DDoS attack on the target, usually invoking the names of well-known threat actor groups. The pattern of these threats is very similar. First, the attacker sends a threatening email, sometimes backed by a small demonstration attack that takes the company offline briefly. The victimized enterprise is given about a week to pay, and the cybercriminal threatens to launch a larger, harder-to-stop attack on a specified date if payment is not made. This is the typical course of an RDoS attack.
Client-Side Attacks

Client-side attacks happen when a website user downloads malicious content that lets the attacker hijack user sessions, run phishing schemes, and damage the website. In financial services, these attacks exploit third-party scripts running on thousands of websites and target payment information. Financial websites rely on third-party scripts to improve the services they offer customers, but because those scripts sit in the path of digital transactions that handle assets and data, they become obvious, rich targets. When credit card information is stolen, for example, large purchases can be made within minutes, or the information can be sold to other criminals for later use. Consumers and service providers may not realize it until it is too late.

Supply Chain Attacks

Although vulnerabilities and attack rates in software are rising, it is common knowledge that most software vulnerabilities go unreported. Front-to-back processing for financial services brings together and integrates a complex set of software applications spanning back office, middle office, risk management, business development, finance, and IT. APIs sit at the heart of these applications and allow them to communicate with each other. Unfortunately, while APIs work so well, they also expose information that can be used to attack the supply chain, such as the applications themselves and their internal structures. Factors like weak authentication, lack of encryption, and unprotected endpoints make APIs even more vulnerable. The attack surface, and therefore the risk, grows as financial services organizations expand their supply chains by engaging other companies to obtain and deliver services. An inadequately protected supply chain makes your company a shining target for attackers who know about API vulnerabilities and will not hesitate to exploit them.
What Is A DDoS Attack?
A Distributed Denial of Service attack is a malicious attempt to disrupt the normal traffic of a server or network by overwhelming the target or its surrounding infrastructure with a massive flow of traffic. We can compare a DDoS attack to an unexpected traffic jam that blocks the highway, disrupting the normal flow of traffic and preventing vehicles from reaching their destination. DDoS attacks use multiple compromised computer systems as their source, which is what makes them effective; those sources can include exploited machines, computers, and IoT devices.

How does a DDoS attack work?

DDoS attacks are carried out by networks of internet-connected machines. These networks consist of computers and other devices infected with malware that lets an attacker control them remotely. The infected devices are called bots (or zombies), and a group of bots is called a botnet. Once the bots form a botnet, the attacker directs the attack by sending remote instructions to each bot, like moving pawns. When a botnet targets a victim's server or network, each bot sends endless, meaningless requests to the target's IP address, potentially overloading the server or network and blocking normal traffic from being served. Moreover, since each bot is a legitimate internet device, it can be difficult to distinguish attack traffic from regular traffic.

Application Layer DDoS Attacks

The vast majority of DDoS attacks occur at Layer 7, the application layer, where web pages are generated on servers and delivered in response to HTTP requests. The purpose of Layer 7 DDoS attacks is to consume the target's resources and create a denial of service. Layer 7 attacks are hard to defend against because malicious traffic is difficult to distinguish from legitimate traffic at this layer. One of the most popular Layer 7 DDoS attacks is the HTTP flood.
The behavior this attack produces on a web page is like pressing F5 constantly on many different computers at the same time, flooding the server with HTTP requests until it denies service. These attacks range from simple to complex. Simpler attacks may hit a single URL from IP addresses and user agents in the same range; more complex versions target many different URLs using multiple IP addresses, random referrers, and random user agents.

So, how can you detect a DDoS attack? The most obvious indicator is that a site or service suddenly slows down or becomes unavailable. But since slowdowns and outages can have other, harmless causes, further investigation is required. Here, traffic analysis tools such as WAFs help identify a DDoS attack. A WAF acts as a reverse proxy: it notifies, alerts, and protects your applications against suspicious volumes of traffic from a single IP address or IP range, floods of traffic from a single user profile, and strange traffic patterns.
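The "suspicious volume of traffic from a single IP" signal can be sketched as a sliding-window request counter. This is a toy illustration of the idea, not how any particular WAF works; the 100 RPS threshold, window size, and IP addresses are all arbitrary example values.

```python
from collections import defaultdict, deque

class FloodDetector:
    """Sliding-window request counter: flags any client IP whose request
    rate exceeds `max_rps` over the last `window` seconds. The threshold
    is illustrative; real WAFs combine many more signals."""
    def __init__(self, max_rps=100, window=1.0):
        self.max_rps = max_rps
        self.window = window
        self.hits = defaultdict(deque)   # ip -> recent request timestamps

    def record(self, ip, now):
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that fell out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_rps * self.window  # True = looks like a flood

det = FloodDetector(max_rps=100, window=1.0)
flagged = False
for i in range(150):                     # 150 requests in a fraction of a second
    flagged = det.record("203.0.113.9", now=10.0 + i / 1000) or flagged
print(flagged)
```

An ordinary visitor issuing a request every second never accumulates more than a couple of timestamps in the window, so they are never flagged; only the burst trips the detector.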
WAF vs. IPS: What’s The Difference?
It would not be wrong to say that one of an organization's most valuable assets is its data. Malicious individuals use different methods to access, capture, and exploit this sensitive data through companies' vulnerabilities. Attacks on network protocols can occur at different layers of the network, which forces us to use different security mechanisms for each layer and attack type. Two of these mechanisms, WAF and IPS, are often compared to each other. Let's look at the difference between them.

Key Difference

First, an IPS relies solely on signatures. It is not aware of the sessions and users trying to access a web application and does not examine them, whereas a WAF is aware of sessions and users and constantly analyzes web traffic. Another fundamental difference is that an IPS operates at Layers 3 and 4, while a WAF operates at Layer 7. All in all, a WAF is extremely useful for protecting web applications and is typically used to secure traffic between servers and users; it understands and has command of web traffic. An IPS, on the other hand, protects a range of network protocols and can perform raw protocol decoding and detect anomalous behavior, but it is unaware of sessions (GET/POST requests), users, or even applications, and cannot use machine learning to learn behavior in these areas and act on it.
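The layer difference can be made concrete with two toy rules. The IPS-style rule below sees only addresses and raw payload bytes; the WAF-style rule sees the parsed HTTP request. Both rules, and every IP, byte pattern, and parameter in them, are invented for illustration and are nothing like production signatures.

```python
def ips_rule(packet):
    """IPS-style (Layers 3/4): matches addresses, ports, and raw payload
    bytes against signatures; knows nothing about users or sessions."""
    blocked_ips = {"203.0.113.50"}
    signatures = [b"\x90\x90\x90\x90"]      # illustrative byte pattern
    if packet["src_ip"] in blocked_ips:
        return "drop"
    if any(sig in packet["payload"] for sig in signatures):
        return "drop"
    return "pass"

def waf_rule(request):
    """WAF-style (Layer 7): understands the HTTP request itself, so it can
    catch an unauthenticated POST or a naive injection attempt that an
    IPS byte signature would never see."""
    if request["method"] == "POST" and not request["session_authenticated"]:
        return "block"
    if "' OR '1'='1" in request["params"].get("q", ""):
        return "block"                      # toy SQL injection check
    return "allow"

print(ips_rule({"src_ip": "198.51.100.7", "payload": b"GET / HTTP/1.1"}))
print(waf_rule({"method": "POST", "session_authenticated": False, "params": {}}))
```

Notice that the second request sails through the IPS-style rule (nothing in the bytes matches a signature) but is blocked by the WAF-style rule, which knows what a method and a session are.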
Cloud vs. On-Prem WAF: What’s The Difference?
When it comes to web application firewalls, cloud WAFs and on-premise WAFs are the two most common types on the market. On-premise WAFs require hardware in your physical server environment; cloud WAFs provide protection through cloud computing, with no hardware costs.

Deployment

On-premise WAFs take time to deploy and require specially trained people to configure them properly; it may take up to a week for the WAF to become fully operational. Cloud-based WAFs are quick and easy to set up, everything can be protected within minutes, and the area they cover is much wider.

Management

On-premise WAFs require your company to have a dedicated IT team to manage operations, or to hire outside IT staff. You get full access and control only when the WAF is properly embedded in the company infrastructure, so managing it requires expertise, which creates additional training costs or the need to recruit staff. Cloud WAFs require very little maintenance; you can even rely on your WAF provider's 24/7 service. Real-time reports on web traffic activity let you take action only when needed. A cloud WAF is also a suitable option for businesses with limited resources and limited time to dedicate to IT operations.

Scalability

On-premise WAFs have limited capacity: if you want to increase your WAF's protection power, you may need to purchase additional hardware. Cloud WAFs provide far more flexibility; you can increase capacity instantly or configure the WAF to scale automatically based on threats and traffic.

Cost

The cost of the two WAF types differs greatly because the expenditures differ. On-premise WAFs require larger long-term investments, including equipment, maintenance, hardware, and upgrade costs, which makes them the more expensive option.
With cloud WAFs, payments are relatively more predictable and of course more affordable, since they consist of annual or monthly subscriptions plus extra fees for add-ons, if any.

Ultimately, the WAF you choose should be determined by your organization's architecture and your specific needs. Generally speaking, cloud WAFs are preferable because they offer flexibility, avoid hardware costs, and provide broader coverage; with a cloud WAF you can also mitigate most problems simply by adjusting policies. But of course, before settling on a WAF, the organization's structure and needs must be clearly determined and the most effective solution chosen accordingly.