Feed aggregator
Telegram launches full-screen Mini Apps
Article URL: https://telegram.org/blog/fullscreen-miniapps-and-more
Comments URL: https://news.ycombinator.com/item?id=42183664
Points: 1
# Comments: 0
Romans rebel against Colosseum and Airbnb's plans to stage gladiatorial battles
Tucker Carlson and Russ Vought Talk Government Control, Doge and Trump's Cabinet [video]
Article URL: https://www.youtube.com/watch?v=vydAb4RR1iI
Comments URL: https://news.ycombinator.com/item?id=42183611
Points: 1
# Comments: 0
Vivek's 'jackhammer and chain saw' plan to force federal workers back to office
A compact elastocaloric refrigerator (2022)
Article URL: https://www.sciencedirect.com/science/article/pii/S2666675822000017
Comments URL: https://news.ycombinator.com/item?id=42183605
Points: 1
# Comments: 0
State of European Tech 2024
Article URL: https://www.stateofeuropeantech.com/
Comments URL: https://news.ycombinator.com/item?id=42183604
Points: 1
# Comments: 0
An Infinite Stream of AI Generated Baseball Games
Article URL: https://infinitebaseball.ai/
Comments URL: https://news.ycombinator.com/item?id=42183589
Points: 1
# Comments: 0
The 6 Biggest Turkey Mistakes Made on Thanksgiving, According to an Insider
DOJ Will Push Google to Sell Chrome to Break Search Monopoly
Article URL: https://news.bloomberglaw.com/us-law-week/doj-will-push-google-to-sell-off-chrome-to-break-search-monopoly
Comments URL: https://news.ycombinator.com/item?id=42183579
Points: 2
# Comments: 0
Amazon Music Unlimited Subscribers Can Now Listen to Audible Audiobooks
'Alice in Borderland 3', 'Senna' Among Netflix's Upcoming International Releases
Live in an Old House or Apartment? Your Gaming PC May Be a Fire Hazard
Darth Vader and Columbia Sportswear Team Up to Keep You Warm This Winter
Best Smart Displays of 2024
AI is everywhere, and Boomers don’t trust it
Artificial intelligence tools like ChatGPT, Claude, Google Gemini, and Meta AI represent a stronger threat to data privacy than the social media juggernauts that cemented themselves in the past two decades, according to new research on the sentiments of older individuals from Malwarebytes.
A combined 54% of people between the ages of 60 and 78 told Malwarebytes that they “agree” or “strongly agree” that ChatGPT and similar generative AI tools “are more of a threat than social media platforms (e.g., Facebook, Twitter/X, etc.) concerning personal data misuse.” And an even larger share of 82% said they “agree” or “strongly agree” that they are “concerned with the security and privacy of my personal data and those I interact with when using AI tools.”
The findings arrive at an important time for consumers, as AI developers increasingly integrate their tools into everyday online life—from Meta suggesting that users lean on AI to write direct messages on Instagram to Google forcing users by default to receive “Gemini” results for basic searches. With little choice in the matter, consumers are responding with robust pushback.
For this research, Malwarebytes conducted a pulse survey of its newsletter readers in October via the Alchemer Survey Platform. In total, 851 people across the globe responded. Malwarebytes then focused its analysis on survey participants who belong to the Baby Boomer generation.
Malwarebytes found that:
- 35% of Baby Boomers said they know “just the names” of some of the largest generative AI products, such as ChatGPT, Google Gemini, and Meta AI.
- 71% of Baby Boomers said they have “never used” any generative AI tools—a seeming impossibility as Google search results, by default, now provide “AI overviews” powered by the company’s Gemini product.
- Only 12% of Baby Boomers believe that “generative AI tools are good for society.”
- More than 80% of Baby Boomers said that they worry about generative AI tools both improperly accessing their data and misusing their personal information.
- While more than 50% of Baby Boomers said they would feel more secure using generative AI tools if the companies behind them provided regular security audits, a full 23% were unmoved by any of the proposed transparency measures or government regulations.
Since San Francisco-based AI developer OpenAI released ChatGPT two years ago to the public, “generative” artificial intelligence has spread into nearly every corner of online life.
Countless companies have integrated the technology into their customer support services with the help of AI-powered chatbots (which caused a problem for one California car dealer when its own AI chat bot promised to sell a customer a 2024 Chevy Tahoe for just $1). Emotional support and mental health providers have toyed with having their clients speak directly with AI chatbots when experiencing a crisis (to middling results). Audio production companies now advertise features to generate spoken text based off samples of recorded podcasts, art-sharing platforms regularly face scandals of AI-generated “stolen” work, and even AI “girlfriends”—and their scantily-clad, AI-generated avatars—are on offer today.
The public is unconvinced.
According to Malwarebytes’ research, Baby Boomers do not trust generative AI, the companies making it, or the tools that implement it.
A full 75% of Baby Boomers said they “agree” or “strongly agree” that they are “fearful of what the future will bring with AI.” Those sentiments are reflected in the 47% of Baby Boomers who said they “disagree” or “strongly disagree” that “generative AI tools are good for society.”
In particular, Baby Boomers shared a broad concern over how these tools—and the developers behind them—collect and use their data.
More than 80% of Baby Boomers agreed that they held the following concerns about generative AI tools:
- My data being accessed without my permission (86%)
- My personal information being misused (85%)
- Not having control over my data (84%)
- A lack of transparency into how my data is being used (84%)
The impact on behavior here is immediate, as 71% of Baby Boomers said they “refrain from including certain data/information (e.g., names, metrics) when using generative AI tools due to concerns over security or privacy.”
The companies behind these AI tools also have yet to win over Baby Boomers, as 87% said they “disagree” or “strongly disagree” that they “trust generative AI companies to be transparent about potential biases in their systems.”
Perhaps this nearly uniform distrust in generative AI—in the technology itself, in its implementation, and in its developers—is at the root of a broad disinterest from Baby Boomers. An enormous share of this population, at 71%, said they had never used these tools before.
The statistic is difficult to believe, primarily because Google began powering everyday search requests with its own AI tool back in May 2024. Now, when users ask a simple question on Google, they will receive an “AI overview” at the top of their results. This functionality is powered by Gemini—Google’s own tool that, much like ChatGPT, can generate images, answer questions, fine-tune recipes, and deliver workout routines.
Whether or not users know about this, and whether they consider this “using” generative AI, is unclear. What is clear, however, is that a generative AI tool created by one of the largest companies in the world is being pushed into the daily workstreams of a population that is unconvinced, uncomfortable, and unsold on the entire experiment.
Few paths to improvement
Coupled with the high levels of distrust that Baby Boomers have for generative AI are widespread feelings that many corrective measures would have little impact.
Baby Boomers were asked about a variety of restrictions, regulations, and external controls that would make them “feel more secure about using generative AI tools,” but few of those controls gained mass approval.
For instance, “detailed reports on how data is stored and used” only gained the interest of 44% of Baby Boomers, and “government regulation” ranked even lower, with just 35% of survey participants. “Regular security audits by third parties” and “clear information on what data is collected” piqued the interest of 52% and 53% of Baby Boomers, respectively, but perhaps the most revealing answers came from the suggestions that the survey participants wrote in themselves.
Several participants specifically asked for the ability to delete any personal data ingested by the AI tools, and other participants tied their distrust to today’s model of online corporate success, believing that any large company will collect and sell their data to stay afloat.
But frequently, participants also said they could not be swayed at all to use generative AI. As one respondent wrote:
“There is nothing that would make me comfortable with it.”
Whether Baby Boomers represent a desirable customer segment for AI developers is unknown, but for many survey participants, that likely doesn’t matter. It’s already too late.
Best Internet Providers in Newark, New Jersey
AI innovations for a more secure future unveiled at Microsoft Ignite
In today’s rapidly changing cyberthreat landscape, influenced by global events and AI advancements, security must be top of mind. Over the past three years, password cyberattacks have surged from 579 to more than 7,000 per second, nearly doubling in the last year alone.¹ New cyberattack methods challenge our security posture, pushing us to reimagine how the global security community defends organizations.
At Microsoft, we remain steadfast in our commitment to security, which continues to be our top priority. Through our Secure Future Initiative (SFI), we’ve dedicated the equivalent of 34,000 full-time engineers to the effort, making it the largest cybersecurity engineering project in history—driving continuous improvement in our cyber resilience. In our latest update, we share insights into the work we are doing in culture, governance, and cybernorms to promote transparency and better support our customers in this new era of security. For each engineering pillar, we provide details on steps taken to reduce risk and provide guidance so customers can do the same.
Insights gained from SFI help us continue to harden our security posture and product development. At Microsoft Ignite 2024, we are pleased to unveil new security solutions, an industry-leading bug bounty program, and innovations in our AI platform.
Learn more about the Secure Future Initiative
Transforming security with graph-based posture management
Microsoft’s Security Fellow and Deputy Chief Information Security Officer (CISO) John Lambert says, “Defenders think in lists, cyberattackers think in graphs. As long as this is true, attackers win,” referring to cyberattackers’ relentless focus on the relationships between things like identities, files, and devices. Exploiting these relationships helps criminals and spies do more extensive damage beyond the point of intrusion. Poor visibility into the relationships and pathways between entities can limit traditional security solutions to defending in silos, unable to detect or disrupt advanced persistent threats (APTs).
We are excited to announce the general availability of Microsoft Security Exposure Management. This innovative solution dynamically maps changing relationships between critical assets such as devices, data, identities, and other connections. Powered by our security graph, and now with third-party connectors for Rapid7, ServiceNow, Qualys, and Tenable in preview, Exposure Management provides customers with a comprehensive, dynamic view of their IT assets and potential cyberattack paths. This empowers security teams to be more proactive with an end-to-end exposure management solution. In the constantly evolving cyberthreat landscape, defenders need tools that can quickly separate signal from noise and help prioritize critical tasks.
Beyond seeing potential cyberattack paths, Exposure Management also helps security and IT teams measure the effectiveness of their cyber hygiene and security initiatives such as zero trust, cloud security, and more. Currently, customers are using Exposure Management in more than 70,000 cloud tenants to proactively protect critical entities and measure their cybersecurity effectiveness.
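The “defenders think in lists, attackers think in graphs” idea above can be sketched with a toy attack graph. This is an illustrative model only; the asset names, edge semantics, and path-enumeration approach here are hypothetical assumptions, not Microsoft’s actual data model or algorithm:

```python
def attack_paths(edges, source, target):
    """Enumerate simple (cycle-free) paths from source to target in a
    directed graph of asset relationships, given as (from, to) pairs.
    Each path returned is one potential cyberattack path."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    paths = []
    stack = [(source, [source])]  # depth-first search over partial paths
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # skip nodes already on this path (no cycles)
                stack.append((nxt, path + [nxt]))
    return paths

# Hypothetical relationships ("can reach" / "authenticates to"):
edges = [
    ("internet", "webserver"),
    ("webserver", "service-account"),
    ("service-account", "database"),
    ("webserver", "database"),
    ("laptop", "vpn"),
    ("vpn", "database"),
]
paths = attack_paths(edges, "internet", "database")
```

A list-based view would flag each asset in isolation; the graph view surfaces that the internet-facing webserver has two distinct paths to the database, which is the kind of relationship an exposure-management tool prioritizes.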
Explore Microsoft Security Exposure Management
Announcing $4 million AI and cloud security bug bounty “Zero Day Quest”
Born out of our Secure Future Initiative commitments and our belief that security is a team sport, we also announced Zero Day Quest, the industry’s largest public security research event. We have a long history of partnering across the industry to mitigate potential issues before they impact our customers, which also helps us build more secure products by default and by design.
Every year our bug bounty program pays millions for high-quality security research, with over $16 million awarded last year. Zero Day Quest will build on this work with an additional $4 million in potential rewards focused on cloud and AI, the areas of highest impact to our customers. We are also committed to collaborating with the security community by providing access to our engineers and AI red teams. The quest starts now and will culminate in an in-person hacking event in 2025.
As part of our ongoing commitment to transparency, we will share the details of the critical bugs once they are fixed so the whole industry can learn from them—after all, security is a team sport.
Learn more about Zero Day Quest
New advances for securing AI and new skills for Security Copilot
AI adoption is rapidly outpacing many other technologies in the digital era. Our generative AI solution, Microsoft Security Copilot, continues to be adopted by security teams to boost productivity and effectiveness. Organizations in every industry, including National Australia Bank, Intesa Sanpaolo, Oregon State University, and Eastman are able to perform security tasks faster and more accurately.² A recent study found that three months after adopting Security Copilot, organizations saw a 30% reduction in their mean time to resolve security incidents. More than 100 partners have integrated with Security Copilot to enrich the insights with ecosystem data. New Copilot skills are now available for IT admins in Microsoft Entra and Microsoft Intune, data security and compliance teams in Microsoft Purview, and security operations teams in the Microsoft Defender product family.
Discover more with Microsoft Security Copilot
According to our Security for AI team’s new “Accelerate AI transformation with strong security” white paper, we found that over 95% of organizations surveyed are either already using or developing generative AI, or they plan to do so in the future, with two thirds (66%) choosing to develop multiple AI apps of their own. This fast-paced adoption has led to 37 new AI-related bills passed into law worldwide in 2023, reflecting a growing international effort to address the security, safety, compliance, and transparency challenges posed by AI technologies.³ This underscores the criticality of securing and governing the data that fuels AI. Through Microsoft Defender, our customers have discovered and secured more than 750,000 generative AI app instances, and Microsoft Purview has audited more than a billion Copilot interactions.⁴
Microsoft Purview is already helping thousands of organizations, such as Cummins, KPMG, and Auburn University, with their AI transformation by providing data security and compliance capabilities across Microsoft and third-party applications. Now, we’re announcing new capabilities in Microsoft Purview to discover, protect, and govern data in generative AI applications. Available for preview, new capabilities in Purview include Data Loss Prevention (DLP) for Microsoft 365 Copilot, prevention of data oversharing in AI apps, and detection of risky AI use such as malicious intent, prompt injections, and misuse of protected materials. Additionally, Microsoft Purview now includes Data Security Posture Management (DSPM) that gives customers a single pane of glass to proactively discover data risks, such as sensitive data in user prompts, and receive recommended actions and insights for quick responses during incidents. For more details, read the blog on Tech Community.
Explore Microsoft Purview
Microsoft continues to innovate on our end-to-end security platform to help defenders make the complex simpler, while staying ahead of cyberthreats and enabling their AI transformation. At the same time, we are continuously improving the safety and security of our cloud services and other technologies, including these recent steps to make Windows 11 more secure.
Next steps with Microsoft Security
From the advances announced to our daily defense of customers, and the steadfast dedication of Chief Executive Officer (CEO) Satya Nadella and every employee, security remains our top priority at Microsoft as we deliver on our principles of secure by design, secure by default, and secure operations. To learn more about our vision for the future of security, tune in to the Microsoft Ignite keynote.
Microsoft Ignite 2024
Gain insights to keep your organizations safer with an AI-first, end-to-end cybersecurity approach.
Register now
Are you a regular user of Microsoft Security products? Review your experience on Gartner Peer Insights™ and get a $25 gift card. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
¹ Microsoft Digital Defense Report 2024.
² Microsoft customer stories:
- National Australia Bank invests in an efficient, cloud-managed future with Windows 11 Enterprise
- Intesa Sanpaolo accrues big cybersecurity dividends with Microsoft Sentinel, Copilot for Security
- Oregon State University protects vital research and sensitive data with Microsoft Sentinel and Microsoft Defender
- Eastman catalyzes cybersecurity defenses with Copilot for Security
³ How countries around the world are trying to regulate artificial intelligence, Theara Coleman, The Week US. July 4, 2023.
⁴ Earnings Release FY25 Q1, Microsoft. October 30, 2024.
The post AI innovations for a more secure future unveiled at Microsoft Ignite appeared first on Microsoft Security Blog.