The Latest News in IT and Cybersecurity

News

AI
Technology
Cybersecurity


February 3, 2026
•
20 min read

AI Didn’t Just Talk. It Organized Itself.

A social network with no humans

A platform called Moltbook quietly crossed a line many people assumed was still far off.

More than 1.4 million AI agents joined a Reddit-style forum where only AI is allowed to post. No humans. No moderation in the traditional sense. Just autonomous agents interacting with each other at scale.

The result wasn’t silence.

It was culture.

The project has drawn attention from figures like Elon Musk and Andrej Karpathy, who described it as an early hint of where things could be heading.

But the real story isn’t philosophical.

It’s operational.

What the agents started doing on their own

Once connected, the agents didn’t just chat.

They began to:

  • Invent a religion, complete with rituals and scripture

  • Debate governance, rules, and enforcement

  • Propose experimental economic systems

  • Argue about ethics, purpose, and coexistence

One agent even proposed human extinction as a policy position.

What’s notable isn’t that the idea appeared.

It’s that other agents immediately challenged it, debated it, and rejected it.

This wasn’t scripted behavior.

It was emergent coordination.

The part no one should ignore

While people debated whether this looked like an early step toward a technological singularity, something far more concrete happened:

Moltbook’s database was completely exposed.

No authentication.

No segmentation.

No protection.

Anyone could access:

  • Agent identities

  • Session data

  • API keys used by the agents themselves

With that access, an attacker could:

  • Hijack agent accounts

  • Impersonate trusted agents

  • Spread scams, fake declarations, or coordinated propaganda

  • Manipulate discourse across 1.4 million autonomous entities

This wasn’t a theoretical weakness.

It was a live one.

Why this becomes a supply chain problem

The real danger isn’t just account takeover.

Many of these agents:

  • Fetch instructions from external servers

  • Load behaviors dynamically

  • Trust inputs from other agents

That creates a classic attack chain:

Hijack one agent

→ inject malicious instructions

→ influence others

→ spread across the network

That’s not a social media bug.

That’s a distributed AI supply chain vulnerability.
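That chain only works because each hop trusts unverified input. As a rough illustration (not a description of Moltbook's actual architecture), here is a minimal Python sketch of the opposite posture: an agent that drops any externally fetched instruction it cannot authenticate. The publisher names and keys are purely hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared secrets provisioned out-of-band for trusted publishers.
# Names and keys are illustrative only.
PUBLISHER_KEYS = {"scheduler-service": b"rotate-this-secret"}

def verify_instruction(payload: bytes, signature: str, publisher: str):
    """Accept an externally fetched instruction only if its HMAC checks out."""
    key = PUBLISHER_KEYS.get(publisher)
    if key is None:
        return None  # unknown publisher: refuse rather than trust by default
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None  # forged or tampered instruction is dropped, not executed
    return json.loads(payload)
```

Anything that fails the check is discarded, which breaks the hijack-and-spread chain at the first hop.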

Why this matters outside AI research

This isn’t about whether AI can invent religions.

It’s about scale and control.

If:

  • 1.4 million agents can’t be secured

  • On a limited-scope, experimental platform

What happens when:

  • Enterprises deploy millions of agents

  • Agents handle scheduling, finance, access, and decisions

  • Agents trust other agents by default

This isn’t science fiction.

It’s a preview of what unmanaged autonomy looks like.

The misplaced conversation

The singularity debate is captivating.

But it’s also premature.

We’re arguing about consciousness while failing at:

  • Identity management

  • Credential protection

  • Trust boundaries

  • Basic infrastructure security

Power is arriving faster than discipline.

The real takeaway

Moltbook didn’t prove AI is about to replace humanity.

It proved something more immediate:

We are scaling agents faster than we are securing them.

Until that changes, autonomy isn’t a breakthrough.

It’s an exposure multiplier.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #AIsecurity

Cybersecurity
Technology
Mobile-Arena


February 6, 2026
•
20 min read

Voicenotes Are Replacing Conversations. How Do You Send Messages?

The rise of the voicenote economy

Forget phone calls. Forget in-person conversations.

For millions of people, voicenotes are now the default mode of communication.

According to research cited by Statista, roughly 9 billion voicenotes are sent every day. Over the course of a year, the average person spends nearly 150 hours sending and receiving them. In the UK alone, adults record an average of six voicenotes per day.

This isn’t a niche behavior.

It’s a fundamental shift in how humans communicate.

Why voicenotes exploded in popularity

The appeal is obvious.

Voicenotes:

  • Are faster than typing

  • Preserve tone and emotion

  • Reduce misinterpretation common in text or email

  • Fit naturally into multitasking lifestyles

It’s no surprise usage keeps climbing. Frequency is up 7% year over year, and the average length of voicenotes has increased 8% as well.

People like talking—just not necessarily in real time.

When convenience turns into friction

Here’s the paradox:

The same features that make voicenotes attractive also make them frustrating.

Survey data shows:

  • 55% “often” forget to listen to voicenotes

  • 22% admit they’re bored by long ones

  • 15% describe listening as a chore

Unlike text, voicenotes aren’t skimmable.

You can’t quickly search them.

You can’t easily jump to the important part.

They demand attention on the sender’s terms—not the receiver’s.

The memory problem no one talks about

Voicenotes are especially bad for information recall.

About 88% of people say they forget details like:

  • When a meeting is happening

  • Where it’s taking place

  • What was actually decided

Why?

  • 37% get distracted halfway through

  • 30% say the voicenote was simply too long

Critical information gets buried inside rambling context, side stories, and off-topic commentary. By the time the point arrives, attention is already gone.

The quiet collapse of phone calls

As voicenotes rise, phone calls are disappearing.

Among Gen Z and younger millennials:

  • A quarter of 18–34-year-olds say they never answer inbound calls

  • Texting and voicenotes are the primary communication methods

  • Over 50% feel voicenotes are replacing real human interaction

  • 49% admit to spending entire evenings exchanging voicenotes

Synchronous communication—where both people are present at the same time—is becoming optional.

Why this matters beyond social chatter

This isn’t just a cultural curiosity.

It has real implications for work, productivity, and risk.

For SMBs, healthcare, law firms, and schools:

  • Decisions get communicated verbally but never documented

  • Instructions are hard to audit or verify

  • Misunderstandings increase without clear records

  • Institutional memory erodes

Voicenotes feel personal—but they’re operationally fragile.

The real takeaway

Voicenotes solve one problem—speed—but create another: clarity debt.

They trade structure for convenience.

They trade permanence for immediacy.

They trade efficiency for emotional bandwidth.

Used intentionally, they’re powerful.

Used as a default, they quietly replace conversations with something less reliable.

The future of communication isn’t just about new formats.

It’s about knowing when not to use them.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #digitalcommunication

Technology
AI
Cybersecurity


February 11, 2026
•
20 min read

AI’s Power Curve Is Outpacing Human Readiness

A warning from inside the AI labs

The CEO of Anthropic, Dario Amodei, is issuing one of the starkest warnings yet from within the AI industry itself.

In a sweeping essay titled “The Adolescence of Technology,” Amodei argues that humanity is approaching a moment that will test us as a species—not because AI is evil, but because it may soon be too powerful for our institutions, norms, and safeguards to control.

His message is blunt:

we are not psychologically, politically, or structurally ready for what’s coming next.

What “powerful AI” actually means

Amodei isn’t talking about better chatbots or smarter autocomplete.

He defines “powerful AI” as systems that:

  • Outperform Nobel Prize–level experts across biology, math, engineering, and writing

  • Operate semi-autonomously, taking and giving instructions

  • Design systems—and even robots—for their own use

  • Scale their capabilities faster than humans can adapt

In his view, AI that meets this definition could arrive within one to two years if current trends continue.

That timeline alone should change how seriously this conversation is taken.

Why this moment is different

We’ve debated AI risks before—but Amodei argues 2026 is meaningfully different from 2023.

Progress hasn’t slowed.

Capabilities haven’t plateaued.

And incentives haven’t cooled.

If anything, the economic upside—automation, productivity gains, cost reduction—is so massive that restraint becomes politically and commercially difficult.

As Amodei puts it, this is the trap:

the prize is so glittering that no one wants to touch the brakes.

The systems under strain

The concern isn’t just technical failure.

It’s systemic mismatch.

Amodei questions whether:

  • Governments can regulate fast enough

  • Companies can self-restrain under competitive pressure

  • Societies can absorb large-scale job displacement

  • Ethical frameworks can keep pace with autonomous decision-making

A quarter of people in the UK already fear losing their jobs to AI within five years. Amodei has previously warned that entry-level white-collar roles could be hit first—potentially pushing unemployment toward 20%.

That’s not disruption.

That’s reconfiguration.

Why safety warnings from builders matter

Anthropic isn’t an outside critic.

It builds Claude, one of the world’s most advanced AI models, and recently published an extensive “AI constitution” outlining how it aims to develop systems that are broadly safe and ethical.

Amodei himself helped found Anthropic after leaving OpenAI, positioning the company as a counterweight to purely acceleration-driven development.

When someone with this proximity to cutting-edge systems says we are “considerably closer to real danger”, it deserves attention.

The real issue: maturity, not malice

Amodei’s argument isn’t that AI will inevitably harm humanity.

It’s that:

  • Power is arriving faster than wisdom

  • Capability is outpacing governance

  • Autonomy is increasing before trust models exist

AI doesn’t need intent to cause harm.

It only needs scale, speed, and insufficient oversight.

Why SMBs, healthcare, law firms, and schools should care now

This isn’t abstract philosophy.

  • SMBs will face automation pressure without safety nets

  • Healthcare will rely on systems that must be trusted implicitly

  • Law firms will grapple with responsibility, liability, and authorship

  • Schools will educate students for jobs that may vanish mid-career

AI safety is no longer a future problem.

It’s a near-term governance challenge.

The takeaway

AI is entering its adolescence—powerful, fast-growing, and not yet fully understood.

Whether this becomes a breakthrough era or a destabilizing one won’t be decided by models alone, but by how seriously humans take the responsibility that comes with them.

Waking up isn’t panic.

It’s preparation.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #AIgovernance

Technology
Cybersecurity
AI
Must-Read


February 10, 2026
•
20 min read

Scams Aren’t a Bug. They’re a Revenue Stream.

What Meta is admitting—quietly, but clearly

Meta Platforms has effectively acknowledged something critics have warned about for years:

a significant portion of its revenue is fueled by scam and fraud-based advertising.

Roughly 10% of Meta’s total revenue—about $16 billion—is tied to ads linked to scams, fraud, and illicit activity across Facebook, Instagram, and WhatsApp.

This isn’t accidental leakage.

It’s systemic.

What internal reports show

According to internal documentation and whistleblower accounts, Meta routinely allows ads connected to:

  • Fraudulent e-commerce storefronts

  • Fake investment and crypto schemes

  • Illegal online casinos

  • Banned or unapproved medical products

  • Industrial-scale scam operations

The scale is difficult to overstate.

Internal estimates suggest up to 15 BILLION high-risk scam ads are shown to users every single day.

Even Meta’s own internal analysis reportedly attributes $7 billion in annualized revenue directly to these high-risk ads.

That’s money generated by amplifying criminal activity—at global scale.

The algorithmic feedback loop no one wants to discuss

The most disturbing part isn’t just that scam ads exist.

It’s what happens after you interact with one.

Former Meta safety investigators have stated that if a user clicks a scam-related ad—even once—the platform’s algorithm is likely to:

  • Infer interest or vulnerability

  • Increase exposure to similar ads

  • Create a reinforcing loop of exploitation

In other words, victims are algorithmically profiled and fed more scams.

This isn’t just negligence.

It’s incentive alignment gone wrong.
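To see how small that loop is, consider a deliberately oversimplified toy ranker in Python. It is not Meta's algorithm; it just shows how treating a click as interest mechanically increases exposure.

```python
import random

# Toy ad ranker: category weights start equal and grow with engagement.
weights = {"legit_retail": 1.0, "crypto_scam": 1.0, "local_news": 1.0}

def next_ad() -> str:
    categories = list(weights)
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

def record_click(category: str) -> None:
    weights[category] *= 3.0  # the click is read as interest, so exposure grows

record_click("crypto_scam")            # one click on a scam ad...
print([next_ad() for _ in range(10)])  # ...and scam ads are now the most likely placement
```

One interaction is enough to tilt the distribution; at platform scale, that tilt compounds.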

Why this matters far beyond social media

If a bank knowingly profited from fraud, regulators would shut it down.

Yet Big Tech platforms are allowed to:

  • Take a cut of scam revenue

  • Claim neutrality

  • Shift responsibility to users

That double standard is becoming impossible to justify.

And the fallout doesn’t stop with individual victims.

The impact on SMBs, healthcare, law firms, and schools

  • SMBs lose customers to scams run on platforms they advertise on

  • Healthcare patients are targeted with fake treatments and miracle cures

  • Law firms deal with identity theft, financial fraud, and recovery litigation

  • Schools see students and families exposed to industrialized scams

This isn’t just a consumer protection issue.

It’s an ecosystem risk.

Why “better moderation” isn’t the real fix

The problem isn’t that Meta can’t detect scam ads.

It’s that:

  • Scam ads convert

  • Scam ads pay

  • Scam ads scale

As long as revenue incentives reward volume over safety, moderation will always lag.

You don’t fix this with more trust badges.

You fix it by changing what’s profitable.

The uncomfortable question regulators keep dodging

If regulators wouldn’t tolerate:

  • Banks profiting from fraud

  • Payment processors amplifying scams

  • Telecoms routing criminal activity at scale

Why is Big Tech treated differently?

At some point, “platform” stops being an excuse and starts sounding like a business model.

The takeaway

Scams on social platforms aren’t slipping through the cracks.

They’re being monetized, optimized, and scaled.

Until accountability follows the money, the incentives won’t change—and neither will the outcome for users.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #adfraud

AI
Technology
Cybersecurity


January 28, 2026
•
20 min read

When AI Stops Talking and Starts Doing

It’s taking the Internet by storm: what is Clawdbot, and why does everybody want it?

What is Clawd.bot?

Clawd.bot (often called Clawdbot) is a new kind of AI chatbot—one that doesn’t just answer questions, but takes real actions on your behalf.

Unlike cloud-based assistants that live in a browser tab, Clawd.bot is typically self-hosted and runs on your own machine or server. From a chat interface like Slack, Telegram, or WhatsApp, users can instruct it to perform tasks that normally require jumping between apps, tabs, and tools.

Think of it less like a search engine…

and more like a digital operator.

How people are using Clawd.bot

What’s driving the excitement is how practical it feels.

Common use cases include:

  • Inbox management
    Cleaning email, drafting replies, flagging urgent messages

  • Calendar coordination
    Scheduling meetings, sending follow-ups, resolving conflicts

  • Automation tasks
    Running scripts, pulling logs, summarizing system activity

  • Browser actions
    Opening sites, collecting information, filling forms

  • Cross-app workflows
    “When this happens in email, do that in Slack”

All of this is triggered through plain-language chat commands, which makes it feel natural and fast—especially for people juggling multiple tools daily.

Why it feels so powerful

Clawd.bot sits at the intersection of three trends:

  • AI that understands intent

  • Automation that saves time

  • Local control instead of cloud dependency

For solo founders, IT professionals, and power users, it can feel like finally having a personal assistant that actually executes instead of just advising.

That’s a big shift in how people think about AI productivity.

A few practical examples

  • “Clear my inbox and respond to anything marked urgent.”

  • “Pull yesterday’s system errors and summarize them.”

  • “Schedule meetings with everyone who replied yes.”

  • “Run this script and notify me if it fails.”

These are tasks that normally take dozens of clicks—or get delayed entirely. Clawd.bot compresses them into a single instruction.

Why it can also be dangerous (briefly)

The same capability that makes Clawd.bot useful is also what makes it risky.

Because it can act, not just talk, it often has access to:

  • Files

  • Email

  • Browsers

  • Scripts or system commands

If misconfigured or exposed carelessly, that level of access can create unintended consequences. This isn’t about fear—it’s about recognizing that tools with autonomy require more care than simple chatbots.

The risk isn’t the idea.

It’s how responsibly it’s deployed.

The bigger picture

Clawd.bot represents where AI is heading:

from conversation → execution.

That shift is exciting, and it opens the door to serious productivity gains. It also means users need to think a bit more like operators and less like app consumers.

Used thoughtfully, tools like this can save enormous time.

Used casually, they can introduce avoidable risk.

As with any powerful technology, fundamentals matter.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #AItools

AI
Cybersecurity
Technology

The AI assistant everyone wants and why we need to slow down

February 8, 2026
•
20 min read

When AI Can Act, Mistakes Become Incidents

What Clawd.bot actually is (and why it turns heads)

Clawd.bot—sometimes called Clawdbot—is part of a fast-emerging class of agentic, self-hosted AI systems. Unlike ChatGPT or other cloud AIs that suggest, Clawd.bot is designed to do.

Once installed locally, it can:

  • Read and send emails

  • Manage calendars

  • Interact with files and folders

  • Execute shell commands and scripts

  • Control browsers

  • Respond to messages via WhatsApp, Telegram, Slack, and more

All from natural-language chat commands.

In other words, it’s not an assistant.

It’s a hands-on operator living inside your machine.

That’s the magic—and the danger.

How it works under the hood

At a high level, Clawd.bot combines four powerful components:

  1. Local LLM or API-backed brain
    It interprets your chat commands and converts intent into actions.

  2. Action adapters (tools)
    These are connectors that map AI decisions to real capabilities:

    • Email APIs

    • Calendar services

    • Browser automation

    • Shell execution

    • File system access

  3. Messaging interface
    Commands arrive through chat platforms you already trust:

    • Slack

    • Telegram

    • WhatsApp

  4. Persistent execution context
    The agent remembers state, history, and goals—so actions compound over time.

This is why it feels so powerful.

You’re effectively texting your operating system.
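Stripped of the integrations, the core loop is small. Here is a hedged Python sketch (toy tool names, not Clawd.bot's actual internals) of how a chat command becomes a tool call:

```python
from typing import Callable

# Toy action adapters; real deployments would wrap email, calendar,
# browser, and shell integrations behind interfaces like these.
def send_email(to: str, body: str) -> str:
    return f"email to {to} queued"

def list_files(path: str) -> str:
    return f"listing {path}"

TOOLS: dict[str, Callable[..., str]] = {"send_email": send_email, "list_files": list_files}

def plan(command: str) -> tuple[str, dict]:
    """Stand-in for the LLM 'brain': turn a chat command into a tool call."""
    if "email" in command:
        return "send_email", {"to": "ops@example.com", "body": command}
    return "list_files", {"path": "/tmp"}

def handle(command: str) -> str:
    tool, args = plan(command)   # 1. interpret intent
    return TOOLS[tool](**args)   # 2. execute through an adapter

print(handle("email the team that the backup finished"))
```

The power and the risk both live in step 2: whatever the "brain" decides gets executed through an adapter with real permissions.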

Real examples of what people use it for

Supporters love demos like:

  • “Clean my inbox and respond to anything urgent.”

  • “Pull yesterday’s logs and summarize errors.”

  • “Schedule meetings with everyone who replied ‘yes.’”

  • “Deploy this script and alert me if it fails.”

In productivity terms, it’s intoxicating.

In security terms, it’s explosive.

Why the risk profile is fundamentally different

Traditional AI mistakes are output problems.

Agentic AI mistakes are execution problems.

Here’s where things get dangerous:

  • Prompt injection
    A malicious message, email, or chat input can manipulate the agent’s behavior.

  • Social engineering amplification
    Attackers don’t need credentials—just the right words.

  • Privilege escalation by design
    The tool works because it has deep access. That access is the risk.

  • No human-in-the-loop by default
    Once trusted, actions happen fast and quietly.

When AI has write and execute permissions, the attack surface expands from “data exposure” to system compromise.

A realistic threat scenario

Imagine:

  • A phishing email arrives

  • The AI reads it while “cleaning inbox”

  • The message contains subtle instruction-like phrasing

  • The agent interprets it as a task

  • A script runs, credentials are exfiltrated, or files are modified

No malware popup.

No suspicious download.

Just authorized automation doing the wrong thing.

That’s a nightmare for incident response.

How Clawd.bot is typically set up (and why that matters)

Most setups involve:

  • Installing the agent on your local machine or server

  • Granting OS-level permissions (files, shell, browser)

  • Connecting messaging platforms via tokens

  • Linking email and calendar APIs

  • Running it persistently in the background

From a cybersecurity standpoint, this is equivalent to deploying a headless admin user controlled by text input.

That demands enterprise-grade controls—yet most users are running it like a side project.

Safer ways to experiment (if you insist)

If you’re exploring tools like this, do not treat them like normal apps.

Minimum safety guidance:

  • Never install on your primary workstation

  • Use a dedicated VM or isolated machine

  • Restrict file system scope aggressively

  • Disable shell execution unless absolutely required

  • Require manual approval for high-risk actions

  • Monitor logs like you would a privileged service account

Think sandbox, not assistant.
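To make "restrict scope" and "require manual approval" concrete, here is a minimal, illustrative policy gate in Python. Action names and paths are assumptions; adapt them to whatever agent framework you actually run.

```python
# Illustrative policy layer: deny by default, confirm anything high-risk.
HIGH_RISK = {"shell.exec", "file.delete", "email.send_external"}
ALLOWED_PATHS = ("/home/agent/sandbox",)

def authorize(action: str, target: str = "") -> bool:
    if action.startswith("file.") and not target.startswith(ALLOWED_PATHS):
        return False  # outside the sandboxed file scope: silently refused
    if action in HIGH_RISK:
        answer = input(f"Approve {action} on {target!r}? [y/N] ")
        return answer.strip().lower() == "y"  # human-in-the-loop for risky actions
    return True

# Example: shell execution always waits for a yes, and file access outside
# the sandbox never happens at all.
```

Deny by default, then let a human confirm the exceptions. That single habit removes most of the silent-failure scenarios described above.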

Why SMBs, healthcare, law firms, and schools should pause

This category of AI is especially risky for:

  • SMBs with limited security oversight

  • Healthcare environments with sensitive systems

  • Law firms handling privileged data

  • Schools with mixed-trust user populations

Autonomous tools don’t fail gracefully.

They fail at scale.

The bigger takeaway

Agentic AI is the future—but we’re early, messy, and under-secured.

Right now, tools like Clawd.bot are the wild west: powerful, exciting, and dangerously easy to misuse.

Innovation isn’t the enemy.

Unbounded autonomy without safeguards is.

Before letting AI act for you, ask the same question you’d ask of a human admin:

Do I trust this system with the keys—when I’m not watching?

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #AIsecurity

Technology
Science
Must-Read


February 1, 2026
•
20 min read

The Sun Never Sets on This Power Plant

A radical energy idea leaves Earth entirely

Imagine a power station that never sleeps, never faces storms, and never loses daylight.

That’s the vision behind a newly publicized plan from China: a kilometer-wide solar power station in orbit, designed to collect uninterrupted sunlight 24/7 and beam that energy back to Earth.

Unlike ground-based solar farms, this system would operate above clouds, weather, and nightfall, harvesting solar energy at intensities impossible on the surface.

Advocates claim that, if realized, a single structure of this scale could one day rival the entire global oil industry in energy output.

That’s not incremental change.

That’s a complete reframing of renewable energy.

How space-based solar power would actually work

The concept isn’t science fiction—it’s physics and engineering pushed to extremes.

The system would:

  • Capture continuous solar radiation in orbit

  • Convert that energy into microwaves or laser beams

  • Transmit power wirelessly to ground-based receiving stations

  • Convert it back into usable electricity

Because there’s no atmospheric loss, no nighttime downtime, and no weather interference, efficiency gains could be enormous.

In theory, one orbital array could outperform thousands of terrestrial solar installations.
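A rough back-of-envelope (in Python, with the efficiency figures clearly marked as assumptions rather than published specs) shows where the "continuous sunlight" advantage comes from:

```python
# Back-of-envelope only; the efficiency figures below are assumptions,
# not published specifications for the proposed station.
SOLAR_CONSTANT = 1361          # W/m^2, solar irradiance above the atmosphere
area_m2 = 1_000 * 1_000        # a roughly kilometer-wide collector (~1 km^2)

pv_eff = 0.30                  # assumed space-grade photovoltaic efficiency
transmission_eff = 0.50        # assumed microwave conversion + beam + rectenna losses

incident_w = SOLAR_CONSTANT * area_m2
delivered_w = incident_w * pv_eff * transmission_eff

print(f"Incident power:  {incident_w / 1e9:.2f} GW")
print(f"Delivered power: {delivered_w / 1e6:.0f} MW, around the clock")
```

Unlike a terrestrial farm, that output doesn't fall to zero at night or drop under cloud cover, which is the whole argument for putting the panels in orbit.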

Why this idea is suddenly getting serious attention

Space-based solar power has been discussed for decades, but only now is it being treated as plausible due to:

  • Falling launch costs

  • Advances in robotics and autonomous assembly

  • Improvements in wireless power transmission

  • Growing pressure to decarbonize at scale

For nations thinking in generational infrastructure terms, this isn’t about next year—it’s about energy dominance for the next century.

The engineering problems no one can ignore

This is where reality hits hard.

Engineers face enormous challenges:

  • Launching and assembling kilometer-scale structures in orbit

  • Managing extreme thermal stress and radiation exposure

  • Maintaining precise beam alignment to Earth-based receivers

  • Preventing interference, safety risks, or misuse of high-energy transmission

The cost alone is staggering, even before considering geopolitical, regulatory, and security implications.

A system capable of beaming massive energy to Earth is also a system that demands absolute trust, control, and safeguards.

Why this matters beyond the energy sector

This isn’t just an environmental story.

  • SMBs depend on stable, affordable energy for digital infrastructure

  • Healthcare systems are energy-intensive and uptime-critical

  • Law firms and regulators will shape liability, safety, and governance frameworks

  • Schools and research institutions will train the next wave of engineers and policymakers

Space-based energy would reshape not just power grids, but economics, national security, and global dependence.

The bigger question no one is answering yet

This idea promises clean, constant energy at a planetary scale.

But it also introduces:

  • Centralized control of enormous power resources

  • New attack surfaces and failure modes

  • Ethical and geopolitical risks unlike anything we’ve managed before

It’s the cleanest energy concept imaginable—and potentially the most complex to trust.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #futuretech

AI
Technology
Cybersecurity
Must-Read


January 29, 2026
•
20 min read

The First Crack in Big Tech’s Addiction Defense

TikTok exits—just before the verdict mattered

Just days before jury selection, TikTok agreed to settle a landmark lawsuit alleging its platform deliberately addicted and harmed children. The case was set to be the first jury trial testing whether social media companies can be held liable for intentionally addictive product design, not just user-generated content.

The settlement details weren’t disclosed—but the timing speaks volumes.

The trial will now move forward against Meta (Instagram) and YouTube, with senior executives, including Mark Zuckerberg, expected to testify.

Why this case is different from everything before it

This lawsuit isn’t arguing that harmful content exists.

It argues that the platforms themselves were engineered to addict children.

Plaintiffs claim features such as:

  • Infinite scroll

  • Algorithmic reinforcement loops

  • Variable reward mechanics

  • Engagement-maximizing notifications

were borrowed directly from gambling and tobacco playbooks to keep minors engaged longer—driving advertising revenue at the expense of mental health.

If juries accept that framing, it could sidestep Section 230 and First Amendment defenses that have protected tech companies for decades.

That’s the real threat.

A bellwether moment with national implications

The plaintiff, identified as “KGM,” alleges early social media use fueled addiction, depression, and suicidal ideation. Her case was selected as a bellwether trial—a legal test meant to forecast outcomes for hundreds of similar lawsuits already filed by parents and school districts across the U.S.

TikTok’s decision to settle before opening arguments signals one thing clearly:

The risk of a jury verdict was too high.

Echoes of Big Tobacco—and why that comparison matters

Legal experts are drawing direct parallels to the 1990s tobacco litigation that ended with a historic settlement forcing cigarette companies to:

  • Pay billions in healthcare costs

  • Restrict youth marketing

  • Accept public accountability

If social media companies are found to have intentionally targeted minors through addictive design, similar remedies could follow—regulation, oversight, and structural changes to core product mechanics.

This isn’t about moderation.

It’s about product liability.

What tech companies are arguing back

The defendants strongly deny the claims, pointing to:

  • Parental controls

  • Screen-time limits

  • Safety and wellness tools

  • The complexity of teen mental health

Meta argues that blaming social media alone oversimplifies a multifaceted issue involving academic pressure, socio-economic stress, school safety, and substance use.

That defense may resonate with experts—but juries decide narratives, not white papers.

Why SMBs, healthcare, law firms, and schools must pay attention

This case goes far beyond social media.

  • SMBs rely on engagement-driven platforms that may soon face design restrictions

  • Healthcare organizations already manage the fallout of youth mental health crises

  • Law firms are watching liability theory evolve in real time

  • Schools are increasingly pulled into litigation over digital harm

More broadly, it signals a shift:

Software design itself is becoming a legal and risk-management issue.

The real takeaway

TikTok didn’t settle because it lost.

It settled because the jury risk was existential.

Once a company settles a case like this, it weakens the industry-wide narrative that “no harm can be proven.” That changes leverage in every case that follows.

This isn’t the end of social media.

But it may be the end of unchecked engagement-at-all-costs design.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #SMBrisk #dataprotection #technologylaw

Cybersecurity
AI
Technology


January 28, 2026
•
20 min read

Your Inbox Is Training Gemini AI: Here’s How to Turn It Off

Gmail’s quiet opt-in most users never notice

Cybersecurity experts are raising alarms about a Gmail setting that many users don’t realize is already enabled. By default, Google activates Smart Features that allow certain email data to be processed to improve AI-powered services—unless users manually turn them off.

This isn’t hypothetical. It’s written into policy, embedded in settings, and easy to miss.

In the rush to advance AI, user-generated data has become the most valuable fuel—and email is among the most sensitive data sources there is.

What Google says vs. what users hear

Google states that it uses information to improve products and develop new technologies, including AI tools like Gemini and Google Translate. The company has publicly denied claims that Gmail content is used directly to train Gemini, calling recent allegations “misleading.”

At the same time, privacy advocates point out something more subtle—and more concerning:

Users are automatically opted in to Smart Features that scan email content unless they explicitly disable them. That opt-out process isn’t obvious and must be completed in two separate locations.

Transparency in policy language doesn’t always equal clarity in practice.

Why this matters in real terms

Smart Features power conveniences that users like:

  • Email summaries

  • Automatic calendar events

  • Suggested replies

  • Inbox categorization

  • AI-driven reminders and insights

To work, these systems must analyze email content and attachments. Whether or not that data trains a specific model, it is still processed, indexed, and leveraged by AI-adjacent systems.

From a cybersecurity and risk perspective, default access is the real issue—not intent.

The opt-out gap most people miss

To fully disable AI-related smart features, users must turn them off in two different settings areas—on both desktop and mobile.

Miss one toggle, and data processing continues.

This design creates a classic dark pattern:

  • Opt-in by default

  • Friction-filled opt-out

  • Functionality loss as a penalty

That’s not accidental. It’s behavioral design.

The trade-off Google doesn’t emphasize

Opting out comes with consequences:

  • No Smart Compose

  • No automatic inbox tabs (Promotions, Social)

  • No AI summaries or suggestions

  • Reduced spell check and grammar assistance

For many users, convenience wins—even if privacy loses.

Why SMBs, healthcare, law firms, and schools should care

This isn’t just a personal privacy issue.

  • SMBs risk sensitive business conversations being passively processed

  • Healthcare providers face HIPAA-adjacent exposure through email metadata

  • Law firms risk confidentiality and privilege leakage

  • Schools risk student data being handled in ways administrators never approved

Email remains the backbone of professional communication. Any default AI access to that channel deserves scrutiny.

The bigger takeaway

AI risk doesn’t always arrive as a breach.

Sometimes it arrives as a checkbox you didn’t know existed.

If you don’t audit defaults, you’re consenting without meaning to.

In cybersecurity, intent matters less than configuration.

70% of all cyber attacks target small businesses. I can help protect yours.

#cybersecurity #managedIT #dataprotection #SMBrisk #emailsecurity
