A new report by Logz.io analyzes key trends and challenges developers face every day. As the cloud and observability sectors continue to mature, the complexity of environments and the speed of incident response remain major challenges.
One of the most interesting, yet troubling, findings of the research is that 64% of respondents report a mean time to recovery (MTTR) of over an hour, compared to 47% in last year's report. What's more, 53.4% of people surveyed last year claimed to resolve production issues within an hour on average; this year, that number dropped to 35.94%.
Another data point from the report reveals that application and data security have moved to the forefront of devops teams' priorities. Data security ranked as the fourth overall concern, cited by 33% of respondents as one of the survey's primary observability challenges. Between an expanding role in security driven by the growing emphasis on the cloud and the need to manage a variety of tools, devops teams are increasingly concerned with security issues.
While the majority of today's devops practitioners report that their cloud and observability efforts are maturing quickly, monitoring complex microservices and speeding incident response continue to pose sizable hurdles. According to the research, observability tooling and practices continue to advance even as new challenges arise, such as monitoring traces and developing visibility into Kubernetes, microservices, serverless, and cloud-native architectures.
By closely tracking and analyzing data that is central to core observability requirements and reducing MTTR despite identified challenges, organizations can better calculate associated spending and ROI. Emphasizing these factors, combined with an increased focus on application and data security, can help solve the challenges identified by devops teams and observability practitioners.
The report reveals that there is too much data and the current model for observability is broken. Organizations are becoming more concerned about the impact of data volumes on production quality and cost. This report offers an analysis of the evolving landscape and calls on organizations to think carefully about the impact of Kubernetes and microservices and constantly evaluate telemetry data value and hygiene.
As AI is integrated into day-to-day lives, justifiable concerns over its fairness, power, and effects on privacy, speech, and autonomy grow. Join this VB Live event for an in-depth look at why ethical AI is essential, and how we can ensure our AI future is a just one.
“AI is only biased because humans are biased. And there are lots of different types of bias and studies around that,” says Daniela Braga, Founder and CEO of Defined.ai. “All of our human biases are transported into the way we build AI. So how do we work around preventing AI from having bias?”
A big factor, for both the private and public sectors, is the lack of diversity on data science teams, but that remains a difficult ask. Right now, the tech industry is notoriously white and male-dominated, and that doesn't look likely to change any time soon. Only one in five graduates of computer science programs are women, and the number of underrepresented minorities is even lower.
The second problem is the bias baked into the data, which then fuels biased algorithms. Braga points to a Google search issue from not long ago, where searches for terms like “school boy” turned up neutral results while searches for terms like “school girl” were sexualized. The root of the problem was gaps in the data, which had been compiled by male researchers who didn't recognize their own internal biases.
For voice assistants, a long-standing problem has been assistants failing to recognize non-white dialects and accents, whether from Black speakers or native Spanish speakers. Datasets need to be constructed to account for gaps like these by researchers who recognize where the blind spots lie, so that models built on that data don't amplify those gaps in their outputs.
The problem might not sound urgent, but when companies fail to put guardrails around their AI and machine learning models, it hurts their brand, Braga says. Failure to root out bias, or a data privacy breach, is a big hit to a company’s reputation, which translates to a big hit to the bottom line.
“The brand impact of leaks, exposure through the media, the bad reputation of the brand, suspicion around the brand, all have a huge impact,” she says. “Savvy companies need to do a very thorough audit of their data to ensure they’re fully compliant and always updating.”
How companies can combat bias
The primary goal should be building a team with diverse backgrounds and identities.
“Looking beyond your own bias is a hard thing to do,” Braga says. “Bias is so ingrained that people don’t notice that they have it. Only with different perspectives can you get there.”
You should design your datasets to be representative from the outset, or to specifically target gaps as they become known. Further, you should test your models constantly after ingesting new data and retraining, and keep track of builds so that if a problem appears, identifying the build in which it was introduced is easy and efficient. Another important goal is transparency, especially with customers, about how you're using AI and how you've designed the models you're using. This helps establish trust and builds a stronger reputation for honesty.
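The constant-testing and build-tracking practice described above can be sketched in a few lines. This is a hypothetical illustration, not from any particular toolkit: it disaggregates accuracy by demographic group and computes the largest gap between groups, which a retraining pipeline could use as a regression gate between model builds. All names (group labels, build identifiers, the threshold idea) are illustrative assumptions.

```python
# Hypothetical sketch of a per-group bias check run after each retrain.
# A build-tracking pipeline could store bias_gap() per build so that a
# regression is traceable to the build that introduced it.

def group_accuracy(predictions, labels, groups):
    """Return accuracy per demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def bias_gap(per_group):
    """Largest accuracy gap between any two groups; a CI gate could
    fail the build if this exceeds a chosen threshold."""
    values = list(per_group.values())
    return max(values) - min(values)

# Compare two hypothetical builds on the same labeled evaluation slice.
labels = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
build_41 = [1, 0, 1, 0, 1, 0]   # predictions from an earlier build
build_42 = [1, 0, 1, 1, 0, 1]   # predictions after retraining

for name, preds in [("build_41", build_41), ("build_42", build_42)]:
    acc = group_accuracy(preds, labels, groups)
    print(name, acc, bias_gap(acc))
```

Here build_41 would fail a fairness gate (it is perfect on group "a" and always wrong on group "b"), while build_42 closes the gap; logging this number per build makes the regression point easy to find.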
Getting a handle on ethical AI
Braga’s number-one piece of advice to a business or tech leader who needs to wrap their head around the practical applications of ethical and responsible AI is to ensure you fully understand the technology.
“Everyone who wasn’t born in tech needs to get an education in AI,” she says. “Education doesn’t mean to go get a PhD in AI — it’s as simple as bringing in an advisor or hiring a team of data scientists that can start building small, quick wins that impact your organization, and understanding that.”
It doesn’t take that much to make a huge impact on cost and automation with strategies that are tailored to your business, but you need to know enough about AI to ensure that you’re ready to handle any ethical or accountability issues that may arise.
“Responsible AI means creating AI systems that are unbiased, that are transparent, that treat data securely and privately,” she says. “It’s on the company to build systems in the right and fair way.”
For an in-depth discussion of ethical AI practices, how companies can get ahead of impending government compliance issues, why ethical AI makes business sense, and more, don’t miss this VB On-Demand event!
A new report from Salt Labs, the research division of Salt Security, found that Salt Security customers experienced a 681% increase in API attack traffic over the past year while their overall API traffic grew 321%. This steep rise in malicious API security calls is causing delayed production rollouts and a lack of confidence in API security strategies, ultimately harming business innovation.
2021 saw a significant rise in API security incidents as organizations continued to transform their ways of working and as developers built more applications and APIs for an ever-growing number of services. Attackers also changed their tactics to target APIs more frequently. As a result, 95% of survey respondents reported having suffered an API security incident in the past 12 months.
Despite these security incidents, the average number of APIs in use per customer increased 221% over the last 12 months, growing from 42 in December 2020 to 135 in December 2021. Combined with the 321% growth in overall API call volume, this means Salt Security customers are using their APIs far more frequently. Twenty-six percent of survey respondents reported using at least twice the number of APIs as a year ago, and 5% reported using more than triple. However, API security concerns continue to impede innovation: 62% of respondents have delayed deploying applications into production because of them. Organizations face an urgent need to reduce the risk around APIs so they can continue to innovate quickly and support business growth.
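The growth figures above are internally consistent; as a quick sanity check, a rise from 42 to 135 APIs does work out to roughly a 221% increase:

```python
# Verify the report's growth math: 42 -> 135 APIs per customer.

def percent_increase(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100

print(round(percent_increase(42, 135)))  # 221
```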
Accordingly, stopping API attacks remains the #1 security priority for surveyed enterprises for the third time in a row (42%). There was additional upside in the results of this edition of the report as well — API security is universally changing how security teams work for the better. More than a third of respondents (34%) reported that security is collaborating more with devops, and another 30% cited that devops is seeking input from security teams to shape API guidelines. An additional quarter of respondents (25%) have security engineers getting embedded with devops teams, which is driving real progress toward DevSecOps adoption.
The report drew on a mix of survey results and anonymized data, including responses from more than 250 security, application and devops executives and professionals, and aggregated empirical customer data from the Salt Security API Protection Platform.
A new report from Palo Alto Networks found that the COVID-19 pandemic affected cloud adoption strategies for nearly every organization over the past year. Data from the report showed that businesses moved quickly to respond to increased cloud demands: nearly 70% of organizations are now hosting more than half of their workloads in the cloud, and overall cloud adoption has grown by 25% in the past year.
That said, the struggle to automate security was palpable, and no matter the reason an organization moves workloads to the cloud, security remains consistently challenging. Respondents noted that the top three challenges in moving to the cloud were maintaining comprehensive security, managing technical complexity, and meeting compliance requirements.
Furthermore, Palo Alto Networks' analysis found that "successful" transformations are more likely when an organization has a cohesive strategy for moving to the cloud that serves as a driving factor behind the program. Organizations that embrace security and automation as part of that cloud adoption strategy also report better business outcomes.
Case in point: 80% of organizations with strong cloud security posture reported increased workforce productivity, and 85% of those with low “friction” between security and development (DevOps) teams report the same. More specifically, organizations that tightly integrate DevSecOps principles are over seven times more likely to have a very strong security posture. This is independent of industry, budget, country, or other demographic categories.
Other findings in the report include key differences in the ways organizations are allocating budget for cloud and cloud security; the organizational practices that differentiate teams with a strong cloud security posture from those with a weak security posture; and the common strategies successful organizations share in achieving secure cloud transformations.
For its report, Palo Alto Networks surveyed 3,000 global professionals working in cloud architecture, InfoSec, and DevOps across five countries to understand the practices, tools and technologies that companies are using to secure and manage cloud native architectures.
Incorporating zero-trust principles into modern data security ensures there is no single point of failure when systems are breached. Under zero trust, even if attackers know the database location/IP, username, and password, they cannot use that information to access privileged information, because access is scoped to specific application roles, identity and access management (IAM) policies, and cloud-network perimeters.
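The principle can be illustrated with a minimal, hypothetical authorization check: valid credentials alone are never sufficient, because every request is also evaluated against role policy and network context. The role names, policy table, and network prefixes below are illustrative assumptions, not from any real product.

```python
# Minimal zero-trust sketch: deny by default, grant only when every
# check passes. Stolen credentials alone do not unlock anything beyond
# the narrow scope the role was granted, if that.

POLICIES = {
    # role -> set of (resource, action) pairs the role may perform
    "billing_app": {("customers", "read")},
    "admin": {("customers", "read"), ("customers", "write")},
}

TRUSTED_PREFIXES = ("10.0.1.", "10.0.2.")  # illustrative cloud perimeters

def authorize(credentials_valid, role, resource, action, source_ip):
    """Every request re-checks credentials, network, and role policy."""
    if not credentials_valid:
        return False
    if not source_ip.startswith(TRUSTED_PREFIXES):
        return False
    return (resource, action) in POLICIES.get(role, set())

# Attacker with stolen billing_app credentials, inside the perimeter:
print(authorize(True, "billing_app", "customers", "write", "10.0.1.7"))  # False
# The same role performing its legitimate, narrowly scoped action:
print(authorize(True, "billing_app", "customers", "read", "10.0.1.7"))   # True
# Even an admin role fails from an untrusted network position:
print(authorize(True, "admin", "customers", "write", "192.168.0.5"))     # False
```

The design choice is that the checks compose with AND, so compromising any single factor (credentials, network position, or role) leaves no single point of failure.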
Today, we live in a hybrid-cloud environment where users, developers, supply-chain vendors, and contractors get data via a web of static infrastructure and cloud applications. Legacy control solutions for this data rely on internal developers’ IAM rules and authorization policies for customer-facing web services.
According to the report, a zero-trust architecture is expected to increase the efficacy of cybersecurity protections in stopping data breaches by 144%. The report also credits an emphasis on securing customer data as another motivator behind enterprise-wide deployment.
Other key highlights from respondents include barriers encountered when deploying a zero-trust architecture, their confidence level in existing cybersecurity protections, the top ten data sources in need of protection, and the total IT budget allocated for zero-trust initiatives by year.
This report references data from an in-depth survey of 125 IT and security decision-makers in midsize and large organizations, all of whom are knowledgeable about how their organization was using or planning to use a zero-trust architecture, or why their organization had intentionally chosen not to do so.
Read the full report by Symmetry Systems and Osterman Research.