The Charm, Such As It Is, of the Charmera
Jan. 16th, 2026 10:23 pm
Kodak did a brisk business over the holidays with its meme camera, the Charmera, which is tiny enough to fit on a key chain and takes deeply lo-fi photos, especially in low light. But it costs $30, and as it happens I do need a keychain, so I thought I would try one out and see what I thought.
Inasmuch as every camera must be inaugurated with a picture of a cat, here is the very first photo out of the camera:

And here is a picture of me, with said camera, in my bathroom mirror.

These pictures are pretty terrible! But admittedly they are also inside my house where the lighting is not great. What happens when we go outside?

Nope, still pretty terrible.
Which is to be expected, as this thing comes with a 1.6-megapixel sensor (1440×1080), and the sensor itself is likely the size of a pinhead. You’re not taking pictures with this camera for high fidelity. You’re taking them for glitchy lo-res fun, in the best lighting you can get. It also shoots video, at the same resolution, but you know what, I’m not even going to bother.
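As a quick sanity check on those specs (the resolution is the one quoted above; the rounding is my own arithmetic), a 1440×1080 sensor works out to just over 1.5 million pixels, which gets rounded up to the advertised 1.6 MP:

```python
# Check the Charmera's quoted resolution against its megapixel rating.
width, height = 1440, 1080       # resolution stated in the specs
pixels = width * height
megapixels = pixels / 1_000_000

print(pixels)                    # 1555200
print(round(megapixels, 2))      # 1.56
```

So "1.6 megapixels" is a generous round-up of about 1.56 million actual pixels.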

In addition to the primary color mode, the Charmera has other “fun” modes, including ones that add frames and goofy pixel art to your picture, which, you know, okay, why not. You need to bring along your own microSD card, and it’s a real pain in the ass to get it in, so you will probably never take it out (you can connect the camera to your computer via USB, which is also how it’s charged). But once the card is in, you can take an effectively infinite number of pictures, because the individual image files are so small.
The UI is not great, and the little screen on the back of the camera is too tiny to be of much use. Quite honestly, I’m not sure what the use case of this thing is, other than to have it, and possibly to give it to an 8-year-old so they can run around taking pictures without the risk of damaging anything valuable, like your phone or a real camera.
But, I mean, as long as you know all that going in, yeah, it’s kind of fun. And at $30-ish, it’s not a huge outlay for trendily pixellated photos. I’ve made worse purchases recently.
— JS
Do photos confirm Elon Musk and Sydney Sweeney are dating?
Jan. 16th, 2026 10:35 pm

Is Enrique Tarrio an ICE agent? Checking claim about DHS leak
Jan. 16th, 2026 10:18 pm

Does 'Nazis.us' redirect to DHS website? Yes, here's who bought the domain
Jan. 16th, 2026 06:44 pm

Did Jennifer Lawrence buy gas station to reward owner who was generous to her years earlier?
Jan. 16th, 2026 06:00 pm

Posts claim ICE agent slipped on actual ice. They're almost right
Jan. 16th, 2026 05:26 pm

Saturday Morning Breakfast Cereal - Gender
Jan. 16th, 2026 11:20 am
Hovertext:
Furry is not a gender, it is a biological sex.
AI and the Corporate Capture of Knowledge
Jan. 16th, 2026 02:44 pm

More than a decade after Aaron Swartz’s death, the United States is still living inside the contradiction that destroyed him.
Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible. Acting on that, he downloaded thousands of academic articles from the JSTOR archive with the intention of making them publicly available. For this, the federal government charged him with a felony and threatened decades in prison. After two years of prosecutorial pressure, Swartz died by suicide on Jan. 11, 2013.
The still-unresolved questions raised by his case have resurfaced in today’s debates over artificial intelligence, copyright and the ultimate control of knowledge.
At the time of Swartz’s prosecution, vast amounts of research were funded by taxpayers, conducted at public institutions and intended to advance public understanding. But access to that research was, and still is, locked behind expensive paywalls. People are unable to read work they helped fund without paying private journals and research websites.
Swartz considered this hoarding of knowledge to be neither accidental nor inevitable. It was the result of legal, economic and political choices. His actions challenged those choices directly. And for that, the government treated him as a criminal.
Today’s AI arms race involves a far more expansive, profit-driven form of information appropriation. The tech giants ingest vast amounts of copyrighted material: books, journalism, academic papers, art, music and personal writing. This data is scraped at industrial scale, often without consent, compensation or transparency, and then used to train large AI models.
AI companies then sell their proprietary systems, built on public and private knowledge, back to the people who funded it. But this time, the government’s response has been markedly different. There are no criminal prosecutions, no threats of decades-long prison sentences. Lawsuits proceed slowly, enforcement remains uncertain and policymakers signal caution, given AI’s perceived economic and strategic importance. Copyright infringement is reframed as an unfortunate but necessary step toward “innovation.”
Recent developments underscore this imbalance. In 2025, Anthropic reached a settlement with publishers over allegations that its AI systems were trained on copyrighted books without authorization. The agreement reportedly valued infringement at roughly $3,000 per book across an estimated 500,000 works, for a total of over $1.5 billion. Plagiarism disputes between artists and accused infringers routinely settle for hundreds of thousands, or even millions, of dollars when prominent works are involved. Scholars estimate Anthropic avoided over $1 trillion in liability costs. For well-capitalized AI firms, such settlements are likely being factored in as a predictable cost of doing business.
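The per-book arithmetic behind that reported figure is easy to check (the per-book value and work count are the estimates quoted above, not confirmed settlement terms):

```python
# Reported Anthropic settlement estimate:
# ~$3,000 per book across ~500,000 works.
per_book = 3_000
works = 500_000
total = per_book * works

print(total)   # 1500000000, i.e. $1.5 billion
```

Which is how roughly $3,000 per book adds up to the $1.5 billion headline number.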
As AI becomes a larger part of America’s economy, one can see the writing on the wall. Judges will twist themselves into knots to justify an innovative technology premised on literally stealing the works of artists, poets, musicians, all of academia and the internet, and vast expanses of literature. But if Swartz’s actions were criminal, it is worth asking: What standard are we now applying to AI companies?
The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.
The stakes extend beyond copyright law or past injustices. They concern who controls the infrastructure of knowledge going forward and what that control means for democratic participation, accountability and public trust.
Systems trained on vast bodies of publicly funded research are increasingly becoming the primary way people learn about science, law, medicine and public policy. As search, synthesis and explanation are mediated through AI models, control over training data and infrastructure translates into control over what questions can be asked, what answers are surfaced, and whose expertise is treated as authoritative. If public knowledge is absorbed into proprietary systems that the public cannot inspect, audit or meaningfully challenge, then access to information is no longer governed by democratic norms but by corporate priorities.
Like the early internet, AI is often described as a democratizing force. But also like the internet, AI’s current trajectory suggests something closer to consolidation. Control over data, models and computational infrastructure is concentrated in the hands of a small number of powerful tech companies. They will decide who gets access to knowledge, under what conditions and at what price.
Swartz’s fight was not simply about access, but about whether knowledge should be governed by openness or corporate capture, and who that knowledge is ultimately for. He understood that access to knowledge is a prerequisite for democracy. A society cannot meaningfully debate policy, science or justice if information is locked away behind paywalls or controlled by proprietary algorithms. If we allow AI companies to profit from mass appropriation while claiming immunity, we are choosing a future in which access to knowledge is governed by corporate power rather than democratic values.
How we treat knowledge—who may access it, who may profit from it and who is punished for sharing it—has become a test of our democratic commitments. We should be honest about what those choices say about us.
This essay was written with J. B. Branch, and originally appeared in the San Francisco Chronicle.
What to know about images claiming to show man named Juan Carlos after ICE assault
Jan. 16th, 2026 02:00 pm

LBCF: Antiheroes
Jan. 16th, 2026 12:00 pm

Rumor UN called for independent investigation into Renee Good's death overstates the truth
Jan. 16th, 2026 12:00 pm

Was ICE training shortened to 47 days to honor Trump? What we know
Jan. 16th, 2026 11:00 am

Girl Genius for Friday, January 16, 2026
Jan. 16th, 2026 05:00 am

The Slow Passage Of Time
Jan. 15th, 2026 09:49 pm
I think Hannelore is about 28 in comic time now, but I also reserve the right to change this if/when it becomes necessary or I feel like it.
