Extreme weather events will soon become more frequent and widespread, devastating areas of the world that typically don’t experience them and amplifying the destruction in areas that do. The United States has already seen severe wildfires and heightened hurricane activity this year. By exposing shortcomings in technical and physical infrastructure, these events will cause significant disruption and damage to IT systems and assets. Data centers will be hit particularly hard, with dependent organizations losing access to services and data, and Critical National Infrastructure (CNI) will be put at risk.
Extensive droughts will force governments to divert water traditionally used to cool data centers, resulting in unplanned outages. In coastal areas and river basins, catastrophic flooding, hurricanes, typhoons or monsoons will hit key infrastructure such as the electrical grid and telecommunication systems. Wildfires will lead to prolonged power outages, stretching continuity arrangements to breaking point. The impact of extreme weather events on local staff, who may be unwilling or unable to get to their workplace, will put operational capability in jeopardy. The magnitude of extreme weather events – and their prevalence in areas that have not previously been prone to them – will create havoc for organizations that have not prepared for their impact.
In addition to these natural forces, environmental activists will draw a link between global warming and data center power consumption, and come to see data centers as valid targets for action. For data-centric organizations, the capabilities of data centers and core technical infrastructure will be pushed to the extreme, as business continuity and disaster recovery plans are put to the test like never before.
What are the Global Consequences of This Threat?
Extreme weather events have frightening consequences for people’s lives and have the potential to degrade or destroy critical infrastructure. From wildfires on the West Coast of the United States that wreck power lines, to extreme rainfall and flooding in South Asian communities that poison fresh water supplies and disrupt other critical services, the impacts of extreme weather are pronounced and deadly. They have severe ramifications for the availability of services and information – for example, in 2015 severe flooding in the UK city of Leeds caused a telecommunications data center to lose power, resulting in a large-scale outage.
According to the Intergovernmental Panel on Climate Change (IPCC), human-induced warming from fossil fuel use, animal agriculture and deforestation will contribute to, and exacerbate, the damage caused by extreme weather events. The impact on human lives, infrastructure and organizations around the world will be destructive.
The probability and impact of extreme weather events are increasing and will soon spread to areas of the world that haven’t historically experienced them. Overall, up to 60% of locations across North America, Europe, East Asia and South America are expected to see a threefold increase in various extreme weather events over the coming years. Moreover, the US Federal Emergency Management Agency has released proposed new flood maps for the west coast of Florida, and many companies that once assumed their data backup sites were safe will find themselves struggling to deal with rising water levels. These increasingly volatile weather conditions will result in severe damage to infrastructure including telecommunication towers, pipelines, cables and data centers.
A study performed by the Uptime Institute found that 71% of organizations are not preparing for severe weather events and 45% are ignoring the risk of environmental disruption to their data centers, highlighting the need to take more action to ensure preparedness and resilience.
Data centers are among the biggest consumers of energy in the world, using up to 416 terawatt hours annually and accounting for 1–3% of global electricity demand, a share that is doubling every four years. According to Greenpeace, only 20% of the energy used by data centers comes from renewable sources. Criticism will soon turn to action, with environmental activists targeting organizations whose technical infrastructure contributes to harming the environment.
As extreme weather events become more likely and more damaging, organizations will be caught off guard as their core infrastructure is crippled and CNI is taken offline. Combined with greater scrutiny from environmental activists, data centers and core infrastructure will be put at risk.
How Should Your Business Prepare?
Extreme weather events, coupled with environmental activism, should prompt a fundamental re-examination of and re-investment in organizational resilience. It is critical that organizations assess the risk to their physical infrastructure and decide whether to relocate it, harden it or transfer the risk to cloud service providers.
In the short term, organizations should review their risk exposure to extreme weather events, taking into account the location of their data centers. They should also revise business continuity and disaster recovery plans and run a cyber security exercise built around an extreme weather scenario.
In the long term, they should consider relocating strategic assets that are at high risk, transferring risk to cloud or outsourced service providers, and investing in infrastructure that is more durable in extreme weather conditions.
About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
If you want to improve the security of your software—and you should—then you need the Building Security In Maturity Model (BSIMM), an annual report on the evolution of software security initiatives (SSIs). The latest iteration, BSIMM11, is based on observations of 130 participating companies, primarily in nine industry verticals and spanning multiple geographies.
The BSIMM examines software security activities, or controls, on which organizations are actually spending time and money. This real-world view—actual practices as opposed to someone’s idea of best practices—is reflected in the descriptions written for each of the 121 activities included in the BSIMM11.
Since the BSIMM is entirely data-driven, each report differs from those before it, because the world of software security keeps evolving. The changes in BSIMM11 reflect that evolution. Among them:
New software security activities
BSIMM10 added new activities to reflect the reality that some organizations were working on ways to speed up security to match the speed with which the business delivers functionality to market.
To those, BSIMM11 adds activities for implementing event-driven security testing and publishing risk data for deployable artifacts. Those directly reflect the ongoing DevOps and DevSecOps evolution and its intersection with traditional software security groups.
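BSIMM11 describes these activities, not any particular implementation, but the idea can be sketched: a handler fires as soon as a deployable artifact exists, runs whatever checks its inputs allow, and publishes machine-readable risk data for downstream consumers. Everything in this sketch is hypothetical, including the `Artifact` shape, the `KNOWN_VULNS` advisory feed, and the report fields.

```python
import json
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    version: str
    sbom: list  # (package, version) pairs describing the artifact's dependencies

# Hypothetical advisory feed; a real pipeline would query a vulnerability database.
KNOWN_VULNS = {("libexample", "1.2.0"): "EXAMPLE-ADVISORY-0001"}

def on_artifact_published(artifact: Artifact) -> dict:
    """Event handler: check dependencies the moment the artifact is published,
    then emit risk data that deployment tooling can consume."""
    findings = [
        {"package": pkg, "version": ver, "advisory": KNOWN_VULNS[(pkg, ver)]}
        for pkg, ver in artifact.sbom
        if (pkg, ver) in KNOWN_VULNS
    ]
    return {
        "artifact": f"{artifact.name}:{artifact.version}",
        "findings": findings,
        "deployable": not findings,
    }

report = on_artifact_published(
    Artifact("payments-svc", "2.4.1", sbom=[("libexample", "1.2.0"), ("requests", "2.24.0")])
)
print(json.dumps(report, indent=2))
```

The point of the pattern is that the test is triggered by the event (publication) rather than by a calendar or a human gate, so risk data is available as early as the artifact itself.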
Don’t just shift left: Shift everywhere
When the BSIMM’s authors began writing about the concept of shifting left around 2006, it was addressing a niche audience. But the term rapidly became a mantra for product vendors and at security conferences, dominating presentations and panel discussions. At the February 2020 RSA conference in San Francisco, you couldn’t get through any of the sessions in the DevSecOps Days track without hearing it multiple times.
And the point is an important one: Don’t wait until the end of the SDLC to start looking for security vulnerabilities.
But the concept was never meant to be taken literally, as in “shift (only) left.”
“What we really meant is more accurately described as shift everywhere—to conduct an activity as quickly as possible, with the highest fidelity, as soon as the artifacts on which that activity depends are made available,” said Sammy Migues, principal scientist at Synopsys and a co-author of the BSIMM since its beginning.
Engineering demands security at speed
Perhaps you could call it moving security to the grassroots. While some organizations tracked in the BSIMM still have only a small, centralized software security group focused primarily on governance, in a growing number of cases engineering teams now perform many of the software security efforts themselves, including CloudSec, ContainerSec, DeploymentSec, ConfigSec, SecTools, OpsSec, and so on.
That is yielding mixed results. Being agile, those teams can perform those activities quickly, which is good, but it can be too fast for management teams to assess the impact on organizational risk. Not so good. Few organizations so far have completely harmonized centralized governance software security efforts and engineering software security efforts into a cohesive, explainable, defensible risk management program.
Still, engineering groups are making it clear that feature velocity is a priority. Security testing tools that run in cadence and invisibly in their toolchains—even free and open source tools—likely have more value today than more thorough commercial tools that create, or appear to create, more friction than benefit. The message: We’d love to have security in our value streams—if you don’t slow us down.
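One way such an in-cadence tool might look is a lightweight secret scanner wired into a pre-commit hook, so it runs invisibly on every commit. This is a toy sketch; the patterns and rule names are illustrative, and real scanners ship curated, much larger rule sets.

```python
import re

# Illustrative detection rules; real tools maintain curated pattern libraries.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list:
    """Return (rule, match) pairs for anything that looks like a secret."""
    return [(name, m.group(0)) for name, rx in PATTERNS.items() for m in rx.finditer(text)]

# A hook would run scan() over the staged diff and block the commit on any hit.
hits = scan("config = {'key': 'AKIAABCDEFGHIJKLMNOP'}")
print(hits)
```

Because the check runs inside the toolchain and in seconds, it adds security without the friction the engineering teams are pushing back against.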
The cloud: Division of responsibility
The advantages of moving to the cloud are well known. It’s cheaper, it makes collaboration of a dispersed workforce easier, and it increases mobility, which is practically mandatory during an extended pandemic.
But using the cloud effectively also means outsourcing to the cloud vendor at least parts of your security architecture, feature provisioning, and other software security practice areas that are traditionally done locally.
As the BSIMM notes, “cloud providers are 100% responsible for providing security software for organizations to use, but the organizations are 100% responsible for software security.”
Digital transformation: Everybody’s doing it
Digital transformation is pervasive, and software security is a key element of it at every level of an organization.
At the executive (SSI) level, the organization must move its technology stacks, processes, and people toward an automate-first strategy.
At the SSG level, the team must reduce analog debt, replacing documents and spreadsheets with governance as code.
At the engineering level, teams must integrate intelligence into their tooling, toolchains, environments, software, and everywhere else.
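As a loose illustration of the "governance as code" idea at the SSG level (the policy fields and project metadata below are invented for this sketch, not taken from the BSIMM), a policy can live as data and be enforced by a check that runs in a pipeline instead of in a spreadsheet:

```python
# A security policy expressed as data rather than a document.
# All field names here are hypothetical examples.
POLICY = {
    "require_static_analysis": True,
    "max_days_since_pentest": 365,
    "banned_licenses": {"AGPL-3.0"},
}

def evaluate(project: dict) -> list:
    """Return a list of policy violations for one project record."""
    violations = []
    if POLICY["require_static_analysis"] and not project.get("static_analysis_done"):
        violations.append("static analysis missing")
    if project.get("days_since_pentest", 10**6) > POLICY["max_days_since_pentest"]:
        violations.append("penetration test out of date")
    if POLICY["banned_licenses"] & set(project.get("licenses", [])):
        violations.append("banned license present")
    return violations

# An empty list means the project is compliant with the policy.
print(evaluate({"static_analysis_done": True, "days_since_pentest": 30, "licenses": ["MIT"]}))
```

Once policy is code, compliance can be re-evaluated automatically on every change instead of during a periodic manual review.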
Security: Getting easier—and more difficult
Foundational software security activities are simultaneously getting easier and harder. Software inventory used to be an Excel spreadsheet with application names. It then became a (mostly out-of-date) configuration management database.
Now organizations need inventories of applications, APIs, microservices, open source, containers, glue code, orchestration code, configurations, source code, binary code, running applications, etc. Automation helps but there are an enormous number of moving parts.
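As a small illustration of one automated inventory feed, assuming only the Python standard library: the open source packages present in the current environment can be enumerated programmatically. A real inventory would merge many such feeds from container registries, API gateways, source control, and so on.

```python
from importlib import metadata

def python_dependency_inventory() -> list:
    """One feed for a living inventory: the open source packages installed in
    this environment, as (name, version) pairs."""
    return sorted(
        (dist.metadata["Name"] or "unknown", dist.version or "0")
        for dist in metadata.distributions()
    )

inventory = {
    "python_packages": python_dependency_inventory(),
    # Other feeds would be populated by their own scanners.
    "containers": [],
    "apis": [],
}
print(f"{len(inventory['python_packages'])} packages discovered")
```

Keeping each feed automated is what separates a living inventory from the out-of-date spreadsheet it replaces.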
“Primarily, we see this implemented as a significant acceleration in process automation, in applying some manner of intelligence through sensors to prevent people from becoming process blockers, and in the start of a cultural acceptance that going faster means that not everything (all desired security testing) can be done in-band of the delivery lifecycle,” Migues said.
Your roadmap to a better software security initiative starts here
There is much more detail in BSIMM11, which reports in depth on the 121 activities grouped under 12 practices that are, in turn, grouped under four domains: governance, intelligence, secure software development life cycle (SSDL) touchpoints, and deployment.
In addition to helping an organization start an SSI, the BSIMM also gives organizations a way to evaluate the maturity of their SSI, from “emerging,” or just starting; to “maturing,” meaning up and running, including some executive support and expectations; to “optimizing,” which describes organizations that are fine-tuning their existing security capabilities to match their risk appetite and right-size their investment for the desired posture.
Wherever organizations are on that journey, the BSIMM provides a roadmap to help them reach their goals.
About the author: Taylor Armerding is an award-winning journalist who has been covering the field of information security for years.
The last several months have been the ultimate case study in workplace flexibility and adaptability. With the onset of the COVID-19 pandemic and widespread emergency activation plans through March and April, businesses large and small have all but abandoned their beautiful campuses and co-working environments. These communal, collaborative and in-person working experiences have been replaced by disparate remote environments that rely on a combination of video, chat and email to ease the transition and keep businesses productive.
The embrace of remote collaboration, and specifically video collaboration, has been swift and robust. In the first few months of the pandemic, downloads of video conferencing apps skyrocketed into the tens of millions, and traffic at many services surged anywhere from 10-fold to 100-fold. While uncertainty remains on what exactly a post-pandemic working experience will look like, it is without a doubt that video will remain a fundamental part of the collaboration tool kit.
While video has proven to be an effective bulwark against a disconnected workforce, the relative newness of the channel, combined with its massive spike in popularity, has revealed some fault lines, most notably several high-profile intrusions into private meetings by ill-intentioned, disruptive individuals. From a wider security perspective, such intrusions represent one of the most significant barriers to the long-term viability of video collaboration. Highly sensitive information and data are now shared over video – board meetings, product development brainstorms, sales reviews, negotiations – and the possibility that any of this information could be seen by the wrong eyes is a business-critical risk.
Yet, the vulnerabilities and threats presented by video conferencing are not insurmountable. In fact, there is a growing movement among CIOs and IT executives to further educate themselves on the nature of these platforms and identify the right solutions that fit the unique needs, opportunities and challenges of their businesses. As a result, there’s been a robust interest in encryption.
The most common forms of encryption protect data when it is most vulnerable: in transit between one system and another. However, in these common forms, communications are often not encrypted when they go through a variety of intermediaries, like internet or application service providers. That leaves them susceptible to intrusion at varying points. If just one link in the chain is weak – or broken entirely – the entire video stream could be compromised.
Comprehensive and thorough protection of sensitive data requires a more robust solution – what’s known as end-to-end encryption. That means only the authorized participants in a video chat are able to access the video or audio streams. Consider it the structural equivalent of a digital storage locker. You may rent the space from the provider, but only the approved participants have the key.
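A deliberately simplified sketch of the idea, using a toy XOR stream cipher that is not real cryptography (production systems use vetted protocols, with keys negotiated by the endpoints themselves): because only the participants hold the key, any relaying service sees ciphertext it cannot read.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream built from BLAKE2b in counter mode. Illustrative only;
    do not use this construction for real traffic."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.blake2b(nonce + counter.to_bytes(8, "big"), key=key).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt = encrypt  # XOR stream cipher: the same operation in both directions

# Only the two endpoints hold the key; it is never sent to the server.
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
frame = b"board meeting: Q3 numbers"
ciphertext = encrypt(key, nonce, frame)   # all the relay server ever sees
assert ciphertext != frame
assert decrypt(key, nonce, ciphertext) == frame
```

The structural point matches the storage-locker analogy: the provider carries and stores the ciphertext, but without the participants' key it holds nothing readable.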
It is important to note that secure video conferencing isn’t only important for large enterprises. Startups and small businesses are just as (if not more) vulnerable and benefit greatly from setting a high bar for security. Whether it’s protecting customers, meeting standards for business partnerships or even leaning into security as an additional value-add, higher levels of security can profoundly impact the growth of an organization.
As the future of work relies increasingly on digital workplace tools like video conferencing, security-first instincts and strong encryption are essential to prevent malicious actors from disrupting business continuity and productivity amid times of uncertainty. Video conferencing has enabled dispersed teams to seize new opportunities, and it has a bright future ahead. Infusing end-to-end encryption into any video strategy ensures the sustainability not only of the channel, but of the businesses that rely on it.
About the author: Michael Armer is Vice President and Chief Information Security Officer at 8x8.