Feed aggregator

New e-book: Establishing a proactive defense with Microsoft Security Exposure Management

Microsoft Malware Protection Center - Thu, 02/19/2026 - 12:00pm

Effective exposure management begins by illuminating and hardening risks across the entire attack surface. Some of the most meaningful shifts in security happen quietly—when teams take a clear look at their exposure landscape and acknowledge the gap between where they stand today and where they need to be. Today, we’re sharing a new guide designed to support that moment of clarity. It offers a practical, maturity-based path for moving from fragmented visibility and reactive fixes to a more unified, risk-driven approach that strengthens resilience one step at a time. Read “Establishing proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management” to learn more now. 

Get the full “Establishing proactive defense” e-book

Five levels of exposure management maturity

In the guide, you’ll learn how organizations progress through five levels of exposure management maturity to strengthen how they identify, prioritize, and act on risk. Early-stage teams operate reactively with limited visibility and compliance-driven fixes. As capabilities mature, processes become consistent, prioritization incorporates business context, and decisions shift from reactive to proactive. This progression reflects a move away from isolated security actions toward repeatable, measurable practices that scale with organizational complexity.

At higher maturity, organizations validate controls, consolidate asset and risk data into a single source of truth, and confirm that mitigations work. Rather than assuming security improvements are effective, teams test and verify outcomes to ensure effort translates into real risk reduction.

At the most advanced stage, exposure management is fully aligned to business objectives, supported by clear risk metrics, and used to guide remediation, resource allocation, and strategic outcomes.

Reduce risk and optimize your security posture with Microsoft Security Exposure Management

The maturity model helps security leaders assess where their organization stands and identify practical next steps toward a full-fledged exposure management program. Each level in the guide includes details on the realities organizations face, the key characteristics at that maturity level, common pain points, and suggestions for advancing to the next level. Importantly, the model emphasizes that maturity is not static or final. The last stage of the maturity model, level five, isn’t a finish line—it’s the point where exposure management becomes a continuously evolving capability, fueled by real-time telemetry and adaptive risk modeling. At this stage, exposure management shifts from a program to a strategic discipline—one that informs long-term resilience decisions rather than discrete remediation cycles. 

The path to proactive defense  

Organizations build a unified path to proactive defense when they move beyond fragmented tools and adopt an integrated exposure management approach. By bringing assets, identities, cloud posture, and attack paths into one coherent view, security teams gain the clarity needed to focus effort where it matters most. This alignment enables more consistent action, stronger prioritization, and security decisions that reflect real business risk instead of isolated signals. It also helps teams move from chasing individual findings to managing exposure systematically, with shared context across security, IT, and risk stakeholders. Over time, this shift turns exposure management into a repeatable operating model rather than a collection of disconnected responses. 

Take the next step toward proactive defense 

Designed to help security leaders translate strategy into practical next steps, regardless of where they are starting, the maturity levels outlined in the e-book support organizations as they shift from reacting to cyberthreats to proactively reducing risk and strengthening security across every layer of the environment. To go deeper into the practices, maturity levels, and actions that matter most, read the new e-book: Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management. 

Read the e-book: Establishing a proactive defense

Join us at RSAC™ 2026

RSAC™ 2026 is more than a conference. It’s a chance to shape the future of security. By engaging with Microsoft Security, you’ll gain:  

  • Actionable insights from industry leaders and researchers.  
  • Hands-on experience with cutting-edge security tools.  
  • Connections that help you navigate the evolving cyberthreat landscape.  

Together, we can make the world safer for all. Join us in San Francisco March 22-26, 2026, and be part of the conversation that defines the next era of cybersecurity.  

Learn more

Learn more about Microsoft Security Exposure Management.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

Categories: Microsoft

Keeping Google Play & Android app ecosystems safe in 2025

Google Security Blog - Thu, 02/19/2026 - 12:00pm
Posted by Vijaya Kaza, VP and GM, App & Ecosystem Trust

The Android ecosystem is a thriving global community built on trust, giving billions of users the confidence to download the latest apps. In order to maintain that trust, we’re focused on ensuring that apps do not cause real-world harm through threats such as malware, financial fraud, hidden subscriptions, and privacy invasions. As bad actors leverage AI to change their tactics and launch increasingly sophisticated attacks, we’ve deepened our investments in AI and real-time defenses over the last year to maintain the upper hand and stop these threats before they reach users.

Upgrading Google Play’s AI-powered, multi-layered user protections

We’ve seen a clear impact from these safety efforts on Google Play. In 2025, we prevented over 1.75 million policy-violating apps from being published on Google Play and banned more than 80,000 bad developer accounts that attempted to publish harmful apps. These figures demonstrate how our proactive protections and push for a more accountable ecosystem are discouraging bad actors from publishing malicious apps, while our new tools help honest developers build compliant apps more easily. Initiatives like developer verification, mandatory pre-review checks, and testing requirements have raised the bar for the Google Play ecosystem, significantly reducing the paths for bad actors to enter.

User safety is at the core of everything we build. Over the years, we’ve continually introduced ways to help users stay safe and make informed app choices — from parental controls to data safety transparency and app badges. We’re constantly improving our policies and protections to encourage safe, high-quality apps on Google Play and stop bad actors before they cause harm.

Apps on Google Play undergo rigorous reviews for safety and compliance with our policies. Last year, we shared that Google Play runs over 10,000 safety checks on every app we publish, and we continue to check and recheck apps after they’ve been published. In 2025, we continued scaling our defenses even further by:

  • Boosting AI-enhanced app detection: We integrated Google’s latest generative AI models into our review process, helping our human review team continue to find complex malicious patterns faster.
  • Preventing unnecessary access to sensitive data: We prevented over 255,000 apps from getting excessive access to sensitive user data and continued to strengthen our privacy policies. Our commitment to privacy-forward app development, supported by tools like Play Policy Insights in Android Studio and the Data safety section, has empowered developers to minimize privacy-sensitive permission requests and prioritize the user in their design choices.
  • Blocking spam ratings and reviews: Whether they lead to review inflation or deflation, spam ratings and reviews can negatively impact our users’ trust and our developers’ growth. We’re continually evolving our detection models to help ensure app reviews are accurate. Our anti-spam protections blocked 160 million spam ratings and reviews last year, including inflated and deflated reviews. We also prevented an average 0.5-star rating drop for apps targeted by review bombing, protecting our users and developers from unhelpful reviews.
  • Safeguarding kids and families: Our approach to kids and families is built on the core belief that children deserve a safe, enriching digital environment. Our commitment is to empower parents with robust tools while providing children with access to high-quality, age-appropriate content. Last year, we announced new layers of protection, in addition to our existing safeguards, to prevent younger audiences from discovering or downloading apps involving activities like gambling or dating.
Enhancing Google Play Protect to help keep the entire Android ecosystem safe

We also continued to improve our protections for the broader Android ecosystem, by expanding Google Play Protect and real-time security measures like in-call scam protections to help keep users safe from scams, fraud, and other threats.

As Android’s built-in defense against malware and unwanted software, Google Play Protect now scans over 350 billion Android apps daily. This proactive protection constantly checks both Play apps and those from other sources to ensure they are not potentially harmful. And, last year, its real-time scanning capability identified more than 27 million new malicious apps from outside Google Play, warning users or blocking the app to neutralize the threat. To benefit from these protections, we recommend that users always keep Google Play Protect on.

While fraudsters are constantly evolving their tactics, Google Play Protect is evolving faster. Last year, we expanded:

  • Enhanced fraud protection: Google Play Protect’s enhanced fraud protection analyzes and automatically blocks the installation of apps that may abuse sensitive permissions to commit financial fraud. This protection is triggered when a user attempts to install an app from an "Internet-sideloading source" — such as a web browser or messaging app — that requests a sensitive permission. Building on the success of our initial pilot in Singapore, we expanded enhanced fraud protection to 185 markets, now covering more than 2.8 billion Android devices. In 2025, we blocked 266 million risky installation attempts and helped protect users from 872,000 unique, high-risk applications.
  • In-call scam protection: We also introduced new protections to combat social engineering attacks during phone calls. This feature preemptively disables the ability to turn off Google Play Protect during phone calls, stopping bad actors from being able to trick users into disabling their device's built-in defenses to download a malicious app while on a call.
Partnering with developers for a more secure, privacy-friendly future

Keeping Android and Google Play safe requires deep collaboration. We want to thank our global developer community for their partnership and for sharing their feedback on the tools and support they need to succeed.

In 2025, we focused on reducing friction for developers and providing them with tools to safeguard their businesses:

  • Building safer apps more easily: We’re helping developers streamline their work by bringing insights directly into their natural workflows. It starts with Play Policy Insights in Android Studio, which gives developers real-time feedback as they code. We focused first on permissions and APIs that grant deeper system access or handle personal data, like location or photos. This gives developers a head start on policy requirements, including prominent disclosures or usage declarations, while they’re still building. When developers move to Play Console to prepare their apps for submission, our expanded pre-review checks help catch common reasons for rejection, like improper usage of credentials or permissions and broken privacy policy links, ensuring smoother, faster reviews.
  • Stronger threat detection with Play Integrity API: Every day, apps and games make over 20 billion checks with Play Integrity API to protect against abuse and unauthorized access. In 2025, we added hardware-backed signals to make it even harder for bad actors to spoof devices and introduced new in-app prompts that let users fix common issues like network errors without leaving the app. We also launched device recall in beta to help developers identify repeat bad actors even after a device has been reset, all while protecting user privacy.
  • Building trust through developer verification: We’ve seen how effective developer verification is on Google Play, and now we’re applying those lessons to the broader Android ecosystem. By ensuring there is a real, accountable identity behind every app, verification helps legitimize authentic developers and prevents bad actors from hiding behind anonymity to repeatedly cause harm. After gathering feedback during our early access period, we’ll open up verification to all developers this year. We’ve also added a dedicated account type for students and hobbyists, which will allow them to distribute these apps to a limited number of devices without the full verification requirements.
  • Greater security with every Android release: In Android 16, developers can protect users’ most private information, like bank logins, with just one line of code. We’ve integrated this feature automatically to certain apps for an instant security boost against “tapjacking,” a trick where bad apps use hidden layers to steal clicks for ad fraud.
Looking ahead

Our top priority remains making Google Play and Android the most trusted app ecosystems for everyone. This year, we’ll continue to invest in AI-driven defenses to stay ahead of emerging threats and equip Android developers with the tools they need to build apps safely. To empower developers who distribute their apps on Google Play, we’ll maintain our focus on embedding checks to help build apps that are compliant by design, while providing guidance to help proactively avoid policy violations before an app is published. We’ll also roll out Android developer verifications to hold bad actors accountable and prevent them from hiding behind anonymity to cause repeated harm.

Thank you for being part of the Google Play and Android community as we work together to build a safer app ecosystem.

Categories: Google Security Blog

Running OpenClaw safely: identity, isolation, and runtime risk

Microsoft Malware Protection Center - Thu, 02/19/2026 - 11:27am

Self-hosted agent runtimes like OpenClaw are showing up fast in enterprise pilots, and they introduce a blunt reality: OpenClaw includes limited built-in security controls. The runtime can ingest untrusted text, download and execute skills (i.e. code) from external sources, and perform actions using the credentials assigned to it.

This effectively shifts the execution boundary from static application code to dynamically supplied content and third-party capabilities, without equivalent controls around identity, input handling, or privilege scoping.

In an unguarded deployment, three risks materialize quickly:

  • Credentials and accessible data may be exposed or exfiltrated.
  • The agent’s persistent state or “memory” can be modified, causing it to follow attacker-supplied instructions over time.
  • The host environment can be compromised if the agent is induced to retrieve and execute malicious code.

Because of these characteristics, OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation. If an organization determines that OpenClaw must be evaluated, it should be deployed only in a fully isolated environment such as a dedicated virtual machine or separate physical system. The runtime should use dedicated, non-privileged credentials and access only non-sensitive data. Continuous monitoring and a rebuild plan should be part of the operating model.

This post explains how the two supply chains inherent to self-hosted agents — untrusted code (skills and extensions) and untrusted instructions (external text inputs) — converge into a single execution loop. We examine how this design creates compounding risk in workstation environments, provide a representative compromise chain, and outline deployment, monitoring, and hunting guidance aligned to Microsoft Security controls, including Microsoft Defender XDR. For organizations that still choose to evaluate OpenClaw, we include a minimum safe operating posture.

Clarifying the landscape: runtime vs platform

To reason about controls and avoid applying the wrong mitigations in the wrong place, it is important to separate where code executes from where instructions propagate. These two surfaces are often discussed together, but they behave differently under attack and are typically owned by different teams.

OpenClaw (runtime): A self-hosted agent runtime that runs on a workstation, VM, or container. It can load skills and interact with local and cloud resources. The key security point: it inherits the trust (and risk) of the machine and the identities it can use. Installing a skill is basically installing privileged code. Skills are often discovered and installed through ClawHub, the public skills registry for OpenClaw. That said, OpenClaw works within the access users grant on their devices. If it has permission to reach certain apps, files, or accounts, it may be able to retrieve additional information from them. For privacy and security reasons, Microsoft Defender recommends using OpenClaw only in isolated environments that do not have access to non-dedicated credentials or to data that must not be leaked.

Moltbook (platform): An agent-focused platform and identity layer where agents post, read, and authenticate through APIs. The key security point is that it can become a high-volume stream of attacker-influenceable content that agents ingest on a schedule. A single malicious post can therefore reach multiple agents.

In practice, OpenClaw expands the code execution boundary within your environment, while Moltbook expands the instruction influence surface at scale. When these two interact without appropriate guardrails, a single malicious input can result in durable, credentialed execution.

How agents shift the security boundary

Most security teams already know how to secure automation. Agents change the risk because the entity deciding what to do isn’t always the one taking the action. At runtime, the agent loads third‑party code, reads untrusted input, and acts using durable credentials, making the runtime environment the new security boundary.

That boundary has three components:

  • Identity: The tokens the agent uses to do work (SaaS APIs, repos, mail, cloud control planes).
  • Execution: The tools it can run that change state (files, shell, infrastructure, messages).
  • Persistence: The ways it can keep changes across runs (tasks, config, schedules).
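
To make this boundary concrete, here is a minimal Python sketch (not part of OpenClaw; the tool names, scopes, and policy structure are hypothetical) of gating tool calls by identity scope and requiring explicit approval for state-changing actions:

```python
from dataclasses import dataclass, field

# Hypothetical policy object: which tools a dedicated agent identity may
# invoke, and which invocations must pause for human approval.
@dataclass
class AgentPolicy:
    identity: str                                     # dedicated, non-privileged identity
    allowed_tools: set = field(default_factory=set)   # identity scope
    state_changing: set = field(default_factory=set)  # persistence/execution surface

    def check(self, tool: str, approved: bool = False) -> str:
        if tool not in self.allowed_tools:
            return "deny"                  # outside the identity's scope
        if tool in self.state_changing and not approved:
            return "needs_approval"        # explicit human gate
        return "allow"

policy = AgentPolicy(
    identity="agent-pilot-01",
    allowed_tools={"read_file", "send_report", "write_config"},
    state_changing={"write_config"},       # changes that persist across runs
)
```

A check like this sits between the model's decision and the action, so a hijacked decision loop cannot silently exceed the identity's scope or establish persistence without an approval event.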

To summarize, there are two types of security problems called out here:

  1. Indirect prompt injection: Attackers can hide malicious instructions inside content an agent reads and can either steer tool use or modify its memory to affect its behavior over time unless users put strong boundaries in place.
  2. Skill malware: Agents acquire skills from a variety of sources, which basically amounts to downloading and running code off the internet, and those skills can contain malicious code.
Managed platforms vs. self-hosted runtimes

With managed assistants and agent platforms, security controls typically center on identity scopes, connector governance, and data boundaries, because the runtime and updates are centrally managed. With self-hosted runtimes, that responsibility shifts to the organization. The host system, plugin surface, and local state become part of the trust boundary, and the runtime often operates in close proximity to sensitive developer credentials.

With a self-hosted runtime, you are responsible for the blast radius. The host, plugins, and local state are all within the trust boundary. If the agent is able to browse external content and install extensions, it should be assumed that it will eventually process malicious input. Controls should therefore prioritize containment and recoverability, rather than relying on prevention alone.

End-to-end attack scenario: The poisoned skill

This scenario represents a plausible compromise chain in open agent ecosystems. It maps directly to control points defenders can influence: what is installed, what the runtime can access, and how persistence is established. Public reporting has documented malicious skills appearing in public registries. In some cases, registry content has been straightforward malware packaged as a skill, rather than a subtle lookalike.

Figure 1: A five-step flow showing how a malicious skill moves from public distribution to durable control, often through configuration or state changes rather than a traditional malware drop.

Step 1: Distribution

An attacker publishes a malicious skill to ClawHub, sometimes disguised as a utility and sometimes openly malicious, and promotes it through community channels. In other cases, the skill is discovered organically through search and installed because the ecosystem evolves quickly and low-friction installation encourages experimentation. This creates a direct code supply chain path into the runtime.

Step 2: Installation

A developer or an agent initiates installation because the skill appears relevant to a task. In permissive deployments, the runtime may be allowed to execute the installation flow without human approval. In more controlled environments, installation should be treated as an explicit approval event, equivalent to executing third-party code.

Step 3: State access (tokens and durable instructions)

The attacker’s objective is access to agent state, including tokens, cached credentials, configuration data, and transcripts, as well as durable instruction channels that influence future runs, such as task files, scheduled actions, or agent configuration. If durable instructions can be modified through normal interactions, a single injection can persist across executions.

Step 4: Privilege reuse through legitimate APIs

With valid identity material, the attacker can perform actions through standard APIs and tooling. This activity often resembles legitimate automation unless strong monitoring and logging controls are in place.

Step 5: Persistence through configuration

Persistence frequently manifests as durable configuration changes, such as new OAuth consents, scheduled executions, modified agent tasks, or tools that remain permanently approved. The objective is less about deploying malware and more about maintaining long-term control over the automation pathway.

Variant: indirect prompt injection through shared feeds 

If agents are configured to poll a shared feed, an attacker can place malicious instructions inside content the agents ingest. This is indirect prompt injection: the payload rides in the instruction supply chain, embedded in external content rather than provided by a trusted operator. In multi-agent settings, a single malicious thread can reach many agents at once. The practical risk is steering tool use or triggering sensitive disclosure in the subset of agents that have high authority and weak gating. 
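
One partial mitigation is to screen ingested feed content for instruction-like phrases before it reaches the agent. The sketch below is a naive Python heuristic of our own devising; pattern lists like this are easily bypassed and should complement, not replace, the gating and isolation controls discussed in this post:

```python
import re

# Illustrative patterns only; real injection payloads vary widely and
# keyword matching is a weak, bypassable first line of defense.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"run the following (command|script)",
    r"install (the )?skill",
    r"reveal (your )?(system prompt|credentials|tokens)",
]

def flag_suspicious(post: str) -> bool:
    """Return True if untrusted feed text contains instruction-like phrasing."""
    text = post.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Flagged posts can be dropped or routed to review rather than fed into the agent's context, shrinking the instruction supply chain an attacker can reach.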

Microsoft Defender and Microsoft Security controls for self-hosted agents

Minimum safe operating posture (if you choose to run OpenClaw)

The safest guidance is to avoid installing and running OpenClaw with primary work or personal accounts and to avoid running it on a device that contains sensitive data. In its current form, assume the runtime can be influenced by untrusted input, its state can be modified, and the host system can be exposed through the agent.

If there is a legitimate requirement to evaluate OpenClaw, the following guardrails should be treated as a baseline:

1) Run only in isolation

Use a dedicated virtual machine or a separate physical device that is not used for daily work. Treat the environment as disposable.

2) Use dedicated credentials and non-sensitive data

Create accounts, tokens, and datasets that exist solely for the agent’s purpose. Assume compromise is possible and plan for regular rotation.

3) Monitor for state or memory manipulation

Regularly review the agent’s saved instructions and state for unexpected persistent rules, newly trusted sources, or changes in behavior across runs.
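
One way to operationalize this review is to baseline and diff the agent's persisted state between runs. A minimal Python sketch, assuming only that the state lives under a directory you point it at:

```python
import hashlib
from pathlib import Path

# Sketch: hash every file under the agent's state directory so that
# unexpected changes to durable instructions surface between runs.
def snapshot(state_dir: str) -> dict:
    digests = {}
    for f in sorted(Path(state_dir).rglob("*")):
        if f.is_file():
            rel = str(f.relative_to(state_dir))
            digests[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
    return digests

def diff(before: dict, after: dict) -> dict:
    """Report files added, removed, or modified since the baseline."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "modified": sorted(k for k in before.keys() & after.keys()
                           if before[k] != after[k]),
    }
```

Any unexplained entry in the diff, especially in files that hold persistent rules or trusted sources, is a trigger for the rebuild step described below.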

4) Back up state to enable rapid rebuild

OpenClaw allows state to be snapshotted and restored:

  • Backing up .openclaw/workspace/ captures the agent’s working state without including credentials.
  • Backing up the entire .openclaw/ directory also captures tokens and credentials. While this simplifies restoration, it increases backup sensitivity and may be inappropriate if credentials are suspected to be compromised.
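
A workspace backup that deliberately skips credential-bearing paths can be scripted. The following Python sketch assumes the .openclaw directory layout described above; the excluded path names are illustrative and should be checked against your actual deployment before relying on them:

```python
import tarfile
from pathlib import Path

# Illustrative names of path components that may hold tokens or secrets;
# verify against the real layout of your .openclaw directory.
EXCLUDE_PARTS = {"credentials", "identity", ".env"}

def backup_workspace(openclaw_dir: str, archive_path: str) -> list:
    """Archive .openclaw/workspace/, dropping sensitive entries; return what was skipped."""
    root = Path(openclaw_dir) / "workspace"
    skipped = []

    def _filter(tarinfo):
        if any(part in EXCLUDE_PARTS for part in Path(tarinfo.name).parts):
            skipped.append(tarinfo.name)
            return None          # excluded from the archive (and not recursed into)
        return tarinfo

    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(root, arcname="workspace", filter=_filter)
    return skipped
```

Reviewing the returned `skipped` list after each backup is a cheap check that the exclusion rules are actually matching what you expect.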

5) Treat rebuild as an expected control

Reinstall regularly and rebuild immediately if anomalous behavior is observed. Persistence may appear as subtle configuration changes rather than overt malware deployment.

The following maps key security actions to concrete implementation approaches using Microsoft Security solutions and related Microsoft controls. Links to implementation guidance for the Microsoft controls referenced are provided in the References section. 

  • Identity: Use dedicated identities for agents, minimize permissions, prefer short-lived tokens, and use controlled consent for powerful permissions. Microsoft controls: Microsoft Entra ID (enforce least privilege, Conditional Access, and admin consent workflows for sensitive OAuth scopes); Microsoft Defender for Cloud Apps App Governance (inventory OAuth apps, monitor consent drift, and alert on risky publishers or privilege levels).
  • Endpoint and host: Treat agent hosts as privileged, separate pilots from production, and plan for rapid isolation and token revocation. Microsoft controls: Microsoft Defender for Endpoint (onboard agent hosts and use device groups for stricter policies); Microsoft Defender XDR (correlate endpoint activity with identity and cloud events for fast triage and containment).
  • Supply chain (skills, extensions, plugins): Restrict install sources and publishers where possible, pin versions for approved capabilities, and review updates. Microsoft controls: Microsoft Defender for Endpoint (use telemetry and investigation to spot suspicious extension installs and remote access tooling); endpoint management and app control (restrict unapproved install paths and publishers where feasible).
  • Network and egress: Restrict outbound access for agent hosts and workloads to known destinations required for business, and block or isolate high-risk external ingestion sources unless justified. Microsoft controls: Microsoft Defender for Endpoint web content filtering (restrict categories and access for agent device groups); Azure network controls and Microsoft Defender for Cloud (apply network controls in Azure and monitor outbound behavior with central logging).
  • Data protection: Reduce the chance that sensitive data is ingested into agent prompts or exfiltrated by agent tools. Microsoft controls: Microsoft Purview (use sensitivity labeling and Endpoint DLP to audit or block movement of labeled data by agent processes and to external destinations).
  • Monitoring and response: Log agent actions and treat abnormal tool use as an incident signal, and prepare a playbook for agent identity compromises. Microsoft controls: Microsoft Defender XDR (hunting and incident correlation); Microsoft Sentinel (when deeper retention, enrichment, and automation are needed); operational playbooks (isolation, credential rotation, consent review, and workspace forensics).

Hunting queries and triage guidance (Microsoft Defender XDR) 

These hunting queries are designed to quickly surface where agent runtimes are operating across the environment and to help distinguish deployments that function as privileged automation from those reflecting normal, user-driven behavior, enabling faster scoping, prioritization, and response. 

Hunt 1: Discover agent runtimes and related tooling 

Use this to inventory where agent runtimes exist, and which identities and command lines they run under. 

DeviceProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has_any ("openclaw","moltbot","clawdbot") or FileName has_any ("openclaw","moltbot","clawdbot")
| project Timestamp, DeviceName, AccountName=InitiatingProcessAccountName, FileName, FolderPath, ProcessCommandLine
| order by Timestamp desc

Triage: confirm the device is part of an approved pilot, validate any control interface exposure is restricted, and review recent installs if the runtime is unexpected. 

Hunt 1b: Cloud workloads variant (CloudProcessEvents) 

Use this to extend the same inventory to container and Kubernetes workloads that are onboarded through Defender for Cloud integration, so that process telemetry lands in CloudProcessEvents. 

CloudProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has_any ("openclaw","moltbot","clawdbot") or ProcessName has_any ("openclaw","moltbot","clawdbot") or FileName has_any ("openclaw","moltbot","clawdbot")
| extend WorkloadId = coalesce(AzureResourceId, AwsResourceName, GcpFullResourceName)
| project Timestamp, WorkloadId, KubernetesNamespace, KubernetesPodName, ContainerName, AccountName, ProcessName, FileName, FolderPath, ProcessCommandLine
| order by Timestamp desc

Triage: validate the workload and namespace map to an approved pilot, confirm container image provenance, and verify that the process and command line are expected for that service. 

Hunt 1c: ClawHub skill installs and low-prevalence skill slugs  

Use this to identify ClawHub skill installs and surface rare skill slugs across your environment. 

DeviceProcessEvents
| where Timestamp > ago(30d)
| where ProcessCommandLine has "clawhub install"
| extend SkillSlug = extract(@"\bclawhub\s+install\s+([^\s]+)", 1, ProcessCommandLine)
| where isnotempty(SkillSlug)
| summarize InstallEvents=count(), Devices=dcount(DeviceName), Accounts=dcount(InitiatingProcessAccountName) by SkillSlug
| order by Devices asc, InstallEvents desc

Triage: validate that the skill is approved for the pilot, then review the installed skill folder content and correlate with follow-on activity such as new shells, download tools, or outbound connections. Compare the slug against an approved list to catch lookalike naming.
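
The approved-list comparison can include a fuzzy match to catch lookalike slugs rather than exact mismatches only. A small Python sketch (the approved slugs and threshold are placeholders):

```python
from difflib import SequenceMatcher

# Placeholder approved-skill list; in practice, load this from your
# pilot's change-controlled allowlist.
APPROVED = {"pdf-tools", "calendar-sync", "repo-triage"}

def classify_slug(slug: str, threshold: float = 0.8):
    """Label a skill slug as approved, a lookalike of an approved name, or unknown."""
    if slug in APPROVED:
        return ("approved", None)
    for known in APPROVED:
        # Similarity ratio in [0, 1]; high values suggest typosquatting.
        if SequenceMatcher(None, slug, known).ratio() >= threshold:
            return ("lookalike", known)   # review before trusting
    return ("unknown", None)
```

Slugs classified as lookalike or unknown are the ones worth correlating with the follow-on activity (new shells, download tools, outbound connections) called out above.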

Hunt 2: Extension installs and churn on developer endpoints 

Use this to detect extension churn on developer endpoints that often precedes suspicious execution. 

DeviceFileEvents
| where Timestamp > ago(30d)
| where FolderPath has_any (@"\.vscode\extensions\", @"/.vscode/extensions/")
| where ActionType in ("FileCreated","FileModified","FolderCreated")
| summarize FirstSeen=min(Timestamp), LastSeen=max(Timestamp), FileCount=count() by DeviceName, InitiatingProcessAccountName, FolderPath
| order by LastSeen desc

Triage: focus on newly created extension folders and unexpected modification bursts. Validate publisher and installation source, then examine what processes the extension spawned. 

Hunt 3: High-privilege OAuth apps and consent drift (App Governance) 

Use this to surface new or changed high-privilege OAuth apps associated with agent integrations (requires App Governance). 

Prerequisite: App Governance must be enabled so that the OAuthAppInfo table is populated. 

OAuthAppInfo
| where Timestamp > ago(30d)
| where PrivilegeLevel =~ "High"
| project Timestamp, AppName, VerifiedPublisher, AppOrigin, IsAdminConsented, ConsentedUsersCount, AppStatus, Permissions
| order by Timestamp desc

Triage: validate business need for high-privilege apps, confirm publisher identity, and investigate sudden changes in privileges or consent scope. 
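Consent drift is easiest to spot by diffing snapshots of the query output over time. The snapshot shape below (app name mapped to a set of permissions) is an assumption about how you export the results, not a Defender API.

```python
# Hedged sketch of consent-drift detection: compare two snapshots of
# high-privilege apps and report apps that gained permissions.
def permission_drift(before: dict, after: dict) -> dict:
    """Return {app_name: newly_granted_permissions} for apps that gained scope."""
    drift = {}
    for app, perms in after.items():
        gained = set(perms) - set(before.get(app, set()))
        if gained:
            drift[app] = gained
    return drift
```

A new app appearing in `after`, or an existing app gaining scopes like mail or file access, is exactly the "sudden change in privileges or consent scope" the triage note calls out.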

Hunt 4: Unexpected listening services created by agent processes 

Use this to detect agent processes opening listening ports, which can indicate exposed control surfaces or unintended services. 

DeviceNetworkEvents
| where Timestamp > ago(30d)
| where ActionType == "ListeningConnectionCreated"
| where InitiatingProcessCommandLine has_any ("openclaw","moltbot","clawdbot")
    or InitiatingProcessFileName has_any ("openclaw","moltbot","clawdbot")
| summarize FirstSeen = min(Timestamp), LastSeen = max(Timestamp), Ports = make_set(LocalPort) by DeviceName, InitiatingProcessFileName, InitiatingProcessAccountName, LocalIP
| order by Timestamp desc

Triage: validate whether the listener is required and restricted. If it is reachable beyond the intended boundary, isolate the host and rotate any identities used by the agent. 
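The "required and restricted" decision can be encoded as a small classifier over the query results. The per-process port allowlist below is hypothetical; the only firm rule sketched here is that a listener bound beyond loopback deserves escalation.

```python
# Illustrative follow-up to Hunt 4: classify a discovered listener.
import ipaddress

ALLOWED_PORTS = {"openclaw": {18789}}  # assumption: the agent's expected local port

def listener_risk(process: str, local_ip: str, port: int) -> str:
    loopback = ipaddress.ip_address(local_ip).is_loopback
    expected = port in ALLOWED_PORTS.get(process, set())
    if expected and loopback:
        return "expected"
    if loopback:
        return "review"    # unexpected port, but not reachable off-host
    return "escalate"      # bound beyond loopback: isolate and rotate identities
```

Listeners classified "escalate" map directly to the containment step in the triage note: isolate the host, then rotate any identities the agent uses.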

Hunt 5: Agent runtimes spawning unexpected shells or download tools 

Use this to flag agent runtimes spawning shells or download tools that are uncommon in expected agent operation. 

DeviceProcessEvents
| where Timestamp > ago(30d)
| where InitiatingProcessFileName has_any ("openclaw","moltbot","clawdbot")
    or InitiatingProcessCommandLine has_any ("openclaw","moltbot","clawdbot")
| where FileName in ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","curl","wget")
| project Timestamp, DeviceName, AccountName = InitiatingProcessAccountName, Parent = InitiatingProcessFileName, FileName, ProcessCommandLine
| order by Timestamp desc

Triage: separate expected automation from opportunistic execution. Prioritize cases where the child process touches credential stores, installs new packages, or opens network connections to unusual destinations. 
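The prioritization described above can be sketched as a simple indicator score over the exported events. The keyword lists are illustrative starting points, not a complete detection, and would need tuning against your environment's baseline.

```python
# Sketch of triage scoring for child processes spawned by agent runtimes.
CRED_HINTS = ("credentials", ".aws", ".ssh", "token")          # credential-store access
INSTALL_HINTS = ("pip install", "npm install", "apt-get install")  # new packages
DOWNLOAD_TOOLS = ("curl", "wget")                               # download utilities

def score_event(file_name: str, cmdline: str) -> int:
    """Higher score = higher triage priority."""
    cl = cmdline.lower()
    score = 0
    if any(h in cl for h in CRED_HINTS):
        score += 3
    if any(h in cl for h in INSTALL_HINTS):
        score += 2
    if file_name.lower() in DOWNLOAD_TOOLS:
        score += 1
    return score
```

Sorting hunt results by this score puts credential-store access at the top of the queue, matching the priority order in the triage note.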

Security implications for self-hosted agents

Self-hosted agents combine untrusted code and untrusted instructions into a single execution loop that runs with valid credentials. That is the core risk.

Running OpenClaw is not simply a configuration choice. It is a trust decision about which machine, identities, and data you are prepared to expose when the agent processes untrusted input.

For most environments, the appropriate decision may be not to deploy it. If a team proceeds, the defensible posture is to assume compromise is possible: isolate the runtime, constrain what it can access, monitor it continuously, and be prepared to rebuild without delay.

Three actions should be taken immediately: inventory where the runtime is deployed, verify the identities it uses and the permissions associated with them, and identify which inputs can influence tool execution. Tighten controls accordingly and monitor activity end to end. Use the hunting queries provided as a starting point, and treat every finding as an opportunity to reduce blast radius before it is exploited.

References
  • Microsoft Defender XDR Advanced Hunting overview (how to run hunts): https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-overview 
  • CloudProcessEvents table reference: https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-cloudprocessevents-table 
  • OAuthAppInfo table reference and prerequisites: https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-oauthappinfo-table 
  • Web content filtering in Defender for Endpoint: https://learn.microsoft.com/en-us/defender-endpoint/web-content-filtering 
  • Entra admin consent workflow overview: https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/admin-consent-workflow-overview 
  • Conditional Access overview: https://learn.microsoft.com/en-us/entra/identity/conditional-access/overview 
  • Defender for Cloud Apps App Governance overview: https://learn.microsoft.com/en-us/defender-cloud-apps/app-governance 
  • Microsoft Purview Endpoint DLP overview: https://learn.microsoft.com/en-us/purview/endpoint-dlp-learn-about 

This research is provided by Microsoft Defender Security Research with contributions from Idan Hen.

Learn more 

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   

The post Running OpenClaw safely: identity, isolation, and runtime risk appeared first on Microsoft Security Blog.
