AI Use and the Sense of Discomfort: Copyright and Literacy Issues at Home and in the Workplace


This article has already been published in Japanese.
Here, I present a new English version, hoping it will quietly reach readers abroad.

The Japanese version is available here.

⚖️ The Spread of AI and a Sense of Discomfort

In recent months, I have felt that AI has become much closer to everyday life.

Previously, I experimented with “Stable Diffusion” by downloading it to my PC or by running image generation on Google Colab with borrowed GPUs. Back then, AI felt more like “a playground for tech enthusiasts.”

Now, however, AI has quietly entered both home and workplace, becoming something anyone can use casually.

While I appreciate its convenience, I also feel a sense of discomfort and concern.

🏡 AI in the Home: Lightness and Discomfort

One symbolic moment was when my wife suggested: “Let’s make LINE stickers with AI!”

We laughed about creating something cute—maybe even something that could sell—and she added, “We can even ask AI for the theme!”

At that moment, I realized: AI is no longer special. It has become part of family conversation, part of daily life.

Yet, I also felt discomfort. How far should we delegate “the human role of thinking” to AI?
Even in playful family settings, this touches on issues of copyright and creative ownership.

💼 AI in the Workplace: Efficiency and Shadows

At work, AI adoption is also accelerating.

It helps organize meeting minutes, drafts content quickly, and even suggests points to include in reply emails.
Rather than creating from scratch, AI acts as a “supporter,” refining the points humans provide.

The efficiency gains are undeniable—meeting minutes take less than half the time, and email replies flow more smoothly.

But again, discomfort arises.
AI-generated text cannot be sent externally without human review.

⚠️ Hallucinations and Misinformation

AI has a phenomenon called “hallucination”—producing information that sounds plausible but is factually incorrect.

In business, using such misinformation unchecked risks damaging trust.
That is why human verification and correction are essential.

Adopting AI blindly in work or business is dangerous unless you start from the premise that it will make mistakes.

⚠️ Copyright and Information Literacy

The greatest concern in AI use lies in copyright and literacy.

When AI-generated text or images are used commercially, copyright handling and source verification are indispensable.
Yet, some people mistakenly believe: “Because AI made it, it’s free to use.”

This is extremely risky.
Ignoring copyright can cost both companies and individuals the trust they have built.

Moreover, AI-generated information is not always accurate.
Without literacy—without awareness of its limitations—AI’s convenience can quickly turn into risk.

🔄 From Special Technology to Daily Tool

Looking back, when I ran Stable Diffusion on my PC, it was still a “pioneer’s experience.”
Generation took time, and AI was rarely mentioned on TV.

Now, AI naturally appears in family conversations and workplace discussions.
It has shifted from “special technology” to “daily tool.”

But as society embraces AI, copyright and literacy issues have only grown more serious.

✅ Conclusion: Balancing Convenience and Responsibility

AI is no longer special; it has entered daily life and work.
It supports both play at home and efficiency at work, and it asks responsibility of us in return.

Yet, without respect for copyright and literacy, convenience turns into risk.
How we use AI will shape the trust of both companies and individuals.

Rather than being swept along by its convenience, we should face our discomfort and concerns directly and think about how to use AI responsibly.

👉 How do you perceive misinformation, hallucinations, and copyright issues when using AI?