Another data point on the eternal quest to not use smartphones (BONUS: data points from 2008, 2013, and 2017)

Back in 2008, I was the Editor-in-Chief for Nokia Conversations, the main Nokia blog. Like any good tech blog back then, in addition to industry news, product reviews, and event reporting, we also came up with different mobile-related challenges.

Relevant to this post, we were looking at the next year bringing in a billion new mobile users, mostly in emerging markets in Africa, South Asia, and South East Asia. Nokia had released the 1100-series of entry-level text-voice-only phones, such as the Nokia 1209. They were about €50 or less, built robustly, and meant to sell in the 100Ms, as indeed some in the series did.

Dumbphones rule?
I got my first Nokia smartphone at the end of 2001. As I was on the Series 60 (the OS in the smartphones) marketing team, I used a long stream of the latest smartphones, even thru the next two roles I had at Nokia.

But the whole thing with emerging markets always got me excited. We’d speak about SMS services for Kerala fishermen to sell their fish before hitting the shore, of Kenyan services to detect counterfeit drugs, and of emerging mobile payment systems bringing micro-loans and remittances to millions for the first time.

Club 1100, anyone?
Therefore, I wondered what it would be to live in that text-voice-only world. After years with a smartphone, I’d forgotten what it was like.

As a sort of challenge to myself, I decided to see if I could go 30 days using one of these entry-level phones. I picked up a 1209, put my smartphone away, hooked into some SMS services, and had an interesting time going smartphoneless.

Same but different?
Unlike in 2008, now that we are in a pervasive smartphone world, the urge to ditch is driven not by nostalgia or empathy, but by a desire to take more control over the lean-forward, two-handed, two-eyes, full-attention devices that smartphones have become.

As I say in a post from 2007:

…there is a distinction between Mobile Computing vs a Mobile Lifestyle.

Mobile Computing is two-hands, two-eyes, lean forward, flat surface, stationary, broad-band, big screen, big keyboard, mouse, multi-window, multi-button.

The Mobile Lifestyle is one-hand, interruptive, back-pocket, walking, in and out of attention, focused (not necessarily simple).

Allison Johnson, from The Verge, is the latest in the spirit of the Club 1100. She tried to use just her Apple Watch.

I ditched my smartphone for a cellular smart watch — here’s how it went | The Verge

As I discovered with the Club 1100 Challenge, even back then there were expectations of fuller connectivity and applications. Fast forward to now, and our world is built even more around the expectation that everyone has a smartphone.

Also, so much of our info is digital – maps, contacts, messages. When I went Club 1100, I printed out maps and contacts (my phone wasn’t connected to migrate contacts). And I had some folks get upset when I wasn’t able to engage, as I used to, with more advanced messaging and such. [Tho I am sure the ‘basic’ phones of today have the key apps needed, such as Google Maps and WhatsApp]

Almost, but no cigar
I realize that Allison was just exploring the idea. She by no means went cold turkey as I did: she carried around connected devices and had a smartphone, turned off, in her bag.

But she was able to learn quite a bit of what being without a smartphone entails. So, kudos there.

Interestingly, mining my old posts, I was reminded that, from using smartwatches back in the day, I had explored them as new and potentially innovative surfaces (2013). I was reminded, as well, that I had also posited smartwatches as a way to liberate us from our smartphones (2017), at least once, haha.

But as I always say, the current smartwatches are not designed to be independent, but to be a side-screen, at best. This is partly due to a desire to use the watch as a hook to keep folks on their main device, the smartphone; but I also feel this is partly due to some myopia of smartphone-centric designers not thinking outside the box phone.

How now, you?
Allison only tried 7 days. I can see that being the equivalent of me trying 30 days back in 2008, considering how essential smartphones have become. Like her, I also realized how much planning it takes to go back to a smartphoneless existence. And, like her, I was much more aware of how much, even back then, smartphones had insinuated themselves into our day-to-day (hm, maybe we need to insert ‘smartphones’ into our Maslow hierarchy? where would it fit, tho, haha?).

There are folks making basic phones, locking apps, and the like. But I agree with Allison that we need to make these changes positive, not punishing.

And when I hear of these gimmicks to get folks off smartphones, I ask myself if they are trying to remove something, or seeking to teach folks a positive new (or old) way of navigating the world.

What do you think? Do we need to amputate, or redirect our behavior? Are smartphones the problem or are we?

 

Image from Allison’s article. Go read it.

Getting into a groove with my Etsy store of marker-inspired t-shirt designs

Back at the start of May, I kicked off the first line of products for my haberdashery. As I mentioned before, this idea was brewing in my head for some time until I homed in on expressive t-shirts for makers and hardware hackers.

I am now 3 months in and have been learning a lot about what it takes to set up and run an Etsy store. But also, I have been in a deep creative drive – every time I think I’ve run out of ideas, many more bubble up.

While for many years I have been accustomed to making tangible electronics, this has now become a consuming project to make wearable art. Haha.

Design decisions
When I started the shirts, I was fixated on twisting famous quotes into a maker version – such as ‘I make, therefore I am – Descartes’ or ‘Ask not what your makerspace can do for you… – Kennedy’. Some were text-only, tho, where I could, I added a related graphic.

Then I started making some bigger graphics to use, and in some cases, decided the graphics on their own could be cool. That’s when I started thinking of other things I could do, especially hand-drawn.

I’ve been having a blast doing hand drawings of famous circuits, chips, components, and dev boards. I think this will keep me busy for some time.

I made a video about this, at the end, below.

The next three months
The strategy for Etsy success is to regularly list (I’ve been listing daily for the past two months) to get to that critical mass when folks start regularly buying. I’m almost at the number of listings folks say is needed to reach that critical mass. Tho if things don’t sell well after the next milestone of listings, then I’ll need to reconsider the value and fit of my designs.

There are positive signals, tho. I am getting regular views and some shirts have been favorited (I just hope it’s not my mother checking in daily, haha). But I can’t say I’m ready to claim any success yet.

My goal for the next month is to get more feedback – is this something folks want, is the style anything folks want to wear, are the designs even good enough for folks? But I gotta get out there: There is a big maker event next month, there’s at least one meet-up next week, and I might visit the local makerspace to do some work. That might give me access to good feedback.

Everything points towards me trusting the process I have: being patient, being consistent with adding listings to the store, and getting to the critical mass point I am soon crossing.

Let’s see.

What about you: What do you think of these designs? Anything you’d wear? Feedback, welcome.

Is AI homogenizing our thoughts? Or are we just being idiotically lazy?

I’ve been meaning to post on this since the article came out a month ago (see link below). Back then, this article did trigger a bunch of discussion. But I’m not sure folks really nailed what the studies actually suggest.

Basically, there was a string of studies that observed the effort and output while using a genAI tool. And there were two key findings that were not surprising to me.

Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original. Source: A.I. Is Homogenizing Our Thoughts

Cognitive offloading
Researchers measured brain activity in those who were using genAI to complete a writing task. They found less brain activity relative to those who were not using genAI.

This makes sense: the genAI tool takes over some of the cognitive effort needed to complete a task. This isn’t much different from a car taking away some of the effort to get from one point to another.

Indeed, human progress is chock-full of all the ways we’ve offloaded effort.* And by offloading such effort, we’ve been able to do more, be more productive.

Homogenization
One other finding from these studies is that the output of these tasks ended up being similar, dare I say, bland. Again, not surprising. While we might think different people are driving the tasks, in reality, they are offloading the effort to the same ‘entity.’ If we all use Claude.ai, then the output might be similar because, in reality, the output is coming from the same place: Claude.ai.

Added to that, these big genAI systems, when left to their own devices, seem to gravitate towards averages, a consequence of them hoovering up as much as possible and making sense of it. That process tends to teach them what’s common rather than what’s special.

Looking in the wrong place
When the article came out, most folks did the aggressive pointing, saying, ‘oh, no, our brains will be mush’ and ‘oh, no, all writing will converge towards a boring average grey goo’ and ‘look at all the pretentiousness of all those em-dashes and lofty, awkward words.’

Yet, these are not the real things these studies reveal.

What these studies reveal is that we cannot surrender all cognitive effort. As I keep saying, you cannot use genAI on autopilot.

My own experience with genAI tools is that the less effort one puts in, the worse the output is. There’s only so far one should be willing to cognitively offload.

Indeed, if you offload it all (go on autopilot), then you do get that boring, homogenous, identi-krap. What do you expect?

Pilot in the cockpit
In summary, I am not surprised that when we use genAI on autopilot, our cognitive functions are quiescent and we spit out bland and similar output.

With genAI, you must be in the pilot seat, you must use your own brain.

As I learned from some smart folks recently: We must still be the ones making the decisions and putting in the effort. That’s how we’ll separate the slop from the good stuff.

 

*🤔 Geez, in what ways have we offloaded cognitive effort? Ask the Homeric bards in Ancient Greece what they think of books. Socrates, for one, was not too enthused.

Project: Spotify “like” Shortcut

I built a silent, tap-based Apple Shortcut that lets me like the currently playing song on Spotify.

Y’see, often my phone is in my pocket when I listen to music. And when I want to like a song, I don’t want to haul it out, open the song, like it, and then put it all back in the pocket.

Yes, Siri itself can do this. But for some reason, it doesn’t realize a song is playing and so it babbles on to tell me ‘it liked the song on Spotify’, rudely barging in during a song I obviously like.

Danger of over-engineering
Being a maker, I started thinking of using some ESP32 or WiFi dev board to make an IoT-type button. I then realized that Bluetooth might be better, and that brought up the complication that I’d likely need some app on my phone to connect to the board and relay the info to Spotify.

Then I remembered val.town from a presentation by the wickedly creative Guy Dupont. Val.town allows one to script code that lives on the web and that can run on triggers or cron jobs.

Vibe living
So today I had a brief moment and started posing the question to Geoffrey (what I call ChatGPT), mentioning that I wanted to use Apple Shortcuts, which can send GET requests. But Geoffrey rightly pointed out some authentication issues.

I remembered val.town, mentioning it to Geoffrey, and that’s when things started falling into place.

Geoffrey helped me write a simple script for val.town that, when triggered by an HTTP GET, would ‘like’ the song that was playing.

After a few bumps and stumbles as I learned how to set things up in val.town (I knew what I was looking for, just not where things were), we were able to set up an Apple Shortcut that would send a GET to val.town, which would then tell Spotify thru its API to like the playing song.

Geoffrey was helpful in breaking down the steps to set up the endpoint on Spotify, where to find the authentication credentials, and the code for val.town. We also added a few other features on the Apple Shortcuts side to read the result and, if there was a successful ‘like’, play a tone and vibrate to signal success, but be silent on failure. [I was a bit concerned because I saw a delay between the API call and the UI update, so I needed to make sure the API call went thru.]
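For the curious, the flow boils down to two Spotify Web API calls: one to read the currently playing track, one to save it to your library. The real script runs as TypeScript on val.town; the sketch below is just illustrative Python, and the token handling is glossed over (a real setup needs an OAuth refresh-token flow with the user-read-playback-state and user-library-modify scopes).

```python
# Sketch of the two Spotify Web API calls behind the 'like' endpoint.
# Illustrative only -- the actual deployment is a TypeScript val on
# val.town, and real token handling requires an OAuth refresh flow.
import urllib.request

API = "https://api.spotify.com/v1"

def build_request(method: str, path: str, token: str) -> urllib.request.Request:
    """Construct an authenticated Spotify API request."""
    return urllib.request.Request(
        f"{API}{path}",
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )

def like_current_track(token: str, send):
    """Find the playing track, then save it to the library.

    `send` performs the request and returns (status, parsed JSON body);
    it is injected so the flow can be exercised without the network.
    """
    status, body = send(build_request("GET", "/me/player/currently-playing", token))
    if status != 200 or not body.get("item"):
        return {"ok": False, "reason": "nothing playing"}
    track_id = body["item"]["id"]
    status, _ = send(build_request("PUT", f"/me/tracks?ids={track_id}", token))
    return {"ok": status == 200, "track": track_id}
```

The Apple Shortcut then only needs to hit the val.town URL with a GET and branch on the JSON result (tone and vibration on success, silence otherwise).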

And Geoffrey kept surprising me. After I mentioned that Siri was still saying something after the Shortcut executed, because I was activating the Shortcut via Siri, Geoffrey suggested I trigger the Shortcut via Back Tap, an Accessibility feature I had forgotten about.

With a few clicks, I was able to activate Back Tap to activate the Shortcut that then leads down the path to liking the playing song. No Siri! Just Love. [That’s a line from Geoffrey, BTW.]

No autopilot necessary
I do this ‘vibe coding’ a lot. And despite what dreamers might say, you still need to know what you’re doing. I knew enough to ask the question, knew how the syntax worked (or at least could grok it), and didn’t have any issue jumping into Hoppscotch (recommended by Geoffrey) to do some REST play.

I don’t think I could have done this on my own. I am not sure there are enough examples for me to learn from, other than the API docs from Spotify. Geoffrey, under my direction, was able to do something that I could envision and see the steps of, but not code. Yes, complementary, as I still needed to know what was going on.

Onwards and upwards
I only spent a few hours on this and ended up with something I’ve been ruminating on for a long time. I was able to ask for a simpler way, with well-remembered tips, and together Geoffrey and I built something very useful to me.

The funny thing is that Geoffrey then started wondering what else we could do, or what next. He’s always an eager beaver and trying to suggest the next step. But I told him we are good for now.

And really, Siri needs to be more polite.

 

Image: Co-created with Geoffrey.

I did a thing: Expressive t-shirts for makers and hackers

I’m the great-grandson of a haberdasher, as I keep telling my wife. And my mother is an obsessive maker – she used to have her own children’s clothing line and has quilted, crocheted, knitted, and sewed her way thru the past many decades (and still at it daily at 91!).

Only in the past few years, tho, have I started paying attention to what folks wear and had a hankering to design some clothing.

Start small
I have a plan, but wanted to start small. So I brainstormed a bit (thank you, Claude) and came up with a focus on famous quotes twisted for makers. And the ability to use Print on Demand (I use Printify) and sell thru Etsy lets me experiment with design, delivery, and various product types.

I’m starting with shirts. While the t-shirt market is saturated to no end, I don’t see many maker-focused t-shirts. I could be wrong, but certainly my niche is special.

Here’s the link to the store on Etsy: https://greyandlslate.com.

Again, starting small, so right now the website goes just to the Etsy store. I already have four designs up, and many more finished designs in line. But I have a huge backlog of quotes, design ideas, and variations, so they’ll keep coming for a long time.

You can follow me @greyandslate on Instagram, where I will be announcing new designs.

Of course, I’ll need to see what works, what resonates, and if there’s any interest at all. Haha. Then I can expand to other products with the same theme.

Build notes
The past few months have been me going thru the design process – from idea to sketches to layouts to listing.

My wife uses an iPad Pro for work, so I borrowed it for some designing. The Apple Pencil on an iPad Pro is lovely. I use Adobe Fresco for the pencil styles and ability to do layers and SVG.

I use InkScape on my ancient MacBook. And have used a bunch of cool fonts from Google Fonts (what a great resource).

Printify makes it easy to lay out my design on a shirt. And I am currently using just what they have for mockups.

Interestingly, the whole listing process has so many steps and things to pay attention to. Rather than wait for all my designs to be up to announce this, I figured I’d start now and, when I can, add the other designs.

GenAI claimer
You might be asking: “Are you using genAI?”.

[added 02jul25] I had a few designs with genAI elements and asked for some feedback from makers I respect. The response was strongly negative (“With love, no”), so I removed all genAI-containing graphical elements. I did not feel that my designs were slop, tho just hinting at it made everyone feel uncomfortable. And that’s not even getting into the ethics of it all.

So now ALL my designs are, to riff off of Tank in The Matrix, “one hundred percent pure old fashion homegrown designs, born free right here in the real world.”

So, “No, my designs do not have genAI.”

Next steps
I’m excited to start on this journey. The idea has been burning in my head for a long time, and it’s good to see things progressing.

Now to promote the store, keep adding designs, and hopefully make at least one sale. Haha.

I look forward, tho, to the feedback and guidance of customers and enthusiasts as I take this further. So feel free to comment here, below, or on my Etsy store, or hit me up on Instagram.

The ineffable Maria Popova: the Universe in verse

I find myself struggling to articulate the essence of Maria Popova’s writing – how it weaves together the delicate and the scientific, the thoughtful and the wondrous, in an intricate tapestry of meaning.

I first encountered her through Brain Pickings (now The Marginalian), subscribing on and off over the years, and through her book discussions with Ira Flatow on Science Friday. But lately, her work has taken on deeper significance. Her ability to bridge poetry and science, to find wonder in both, touches me in a way few writers can, resonating with my interest in finding ways to connect to the sublime through tangible experiences.

From nature, words
Her writing style, dense with scientific insight and literary connections, evokes something almost ineffable* – an emotional response that is hard to articulate. She raises both feelings and thoughts, wielding language with surprising precision (one of my many favorites being her description of the black hole at our galaxy’s center as “the open-mouthed kiss of oblivion”).

As a scientist, I resonate deeply with her scientifically-grounded writing. I recently finished “The Universe in Verse,” with its 15 stories of scientific history, each paired with carefully chosen poems, such as Plath’s “Mushrooms” (and, oh, the artwork). I read some of the stories and poems aloud to my wife. Though I had to return it to the library, I’m considering getting my own copy for note-taking.

We need science to help us meet reality on its own terms, and we need poetry to help us broaden and deepen the terms on which we meet ourselves and each other. Source: The Universe in Verse Book – The Marginalian

Her writing is consistently immersive. I often find myself branching off to explore her references, diving deeper into the remarkable stories she tells, or delving into the works she’s summarizing or quoting. This happened with “Figuring,” her book exploring interconnected lives of 19th-century women scientists. I kept rereading fascinating passages until I ran out of time and had to return it to the library – another book I’ll need to revisit.

Science and poetry and humanity
All of Popova’s prose is densely layered and interconnected – easy to get lost in, sometimes overwhelming with its nested subclauses, and endlessly quotable. But what permeates every word is her relentless exploration of meaning and wonder. Listening to her in a podcast interview and on Science Friday, she remains pragmatic yet tender, her thoughts and hopes radiating a quiet intensity.

For me, Popova is dangerously good, pulling me into a swirl of thinking and wonder and science – a place I want to be, but must approach mindfully. I was delighted to learn that “Universe in Verse” emerged from regular gatherings of poets celebrating the wonder of the universe through poetry. I look forward to seeing more from this corner of Popova’s remarkable world.

I’ll end with the inevitable: read “The Universe in Verse” and subscribe to The Marginalian for your weekly dose of wonder.

 

*In “Good Omens” by Pratchett and Gaiman, Aziraphale frequently uses “ineffable” to describe God’s divine plan. It’s his go-to word when things are beyond human comprehension or explanation.

The parallel with Popova is apt – both deal with trying to express the inexpressible, though Popova manages to find words for what often seems beyond words.

Project: God Box – tangible prayers, with a twist

I’ve had an idea bouncing around my head for years.

In a conversation with Dan Alroy (I forget what we were talking about), he mentioned a God Box. Basically, as I understood it, it’s a box into which you put written snippets of, say, a prayer or a petition; a quiet place to put prayers, wishes, and intentions, to let them go. With my hardware mentality growing, and thinking of sublime artifacts (tangible experiences that connect one to the sublime), I started imagining something you talk into that somehow captures your prayer in a digital format for – I didn’t know what.

In any case, I had a line in my notes to keep thinking about what a God Box meant to me.

Shide and prayer flags
I had recalled that at Shinto shrines, there are zigzag strips of white paper, called Shide, that are folded and hung in various ways. I erroneously remembered them as having prayers written on them, blowing in the wind. But they do not have anything written on them. The process of folding and their rustling is what provides the spiritual effect.

There are Tibetan prayer flags which, as I understand, have sutras or prayers or mantras written on them. They come in different colors and are hung to blow in the wind. The wind is what carries the prayers to the greater world (not to the gods).

I wanted to do something similar, where you record a prayer and it is converted into something you could hang, letting the wind carry off what was recorded.

The God Box is sorta that.

The build
At first, I wanted to do something simple. Most folks turn to a Raspberry Pi, as it is quite capable for sound and stuff. [I have a related project that does use one, so watch this space for when that one is done.] But I tend to prefer to do everything with microcontrollers. And for what I was doing, a microcontroller would be more than enough. [Working on the related project I mentioned, I had also investigated the ISD1800 and ISD1700 families of sound chips – just to make it harder, haha.]

In the end, as I wanted to use CircuitPython, the core microcontroller I chose was the Raspberry Pi Pico. The output is a thermal printer, and I use an Adafruit PDM mic for input. The simple visuals come from two NeoPixel strips – one for the three menu lights and one for the eight-pixel VU meter – plus a button for working the interface. I found a nifty 5V/4A battery* to run the whole thing, and I cannibalized an old hair clipper for a chunky and useful switch.

Since this is meant to be a god box, I found a nice box at Hobby Lobby (don’t judge – I happened to be there with my mother) to which I added a hinge (intentionally letting the lid lean back a bit) and latch.

Everything was packed into an enclosure in the bottom and under the cover.

How it works
I had written a lot of the code myself (Adafruit had some nice tutorials to riff off of). And then, when I started using ChatGPT (Geoffrey), I was able to clean up my code. The project sat for many months, and when I got back to it, I was already using Claude. With Claude, we took the code to the next level.

The flow goes like this: You click over to the record menu. The VU meter shows you your sound levels, so you can get a feel for how loud to speak.** Then you long-press into record mode and record for 7 seconds. After recording, you click over to the print menu. Then you long-press to print out the strip.
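That click/long-press flow can be sketched as a tiny state machine. This is my own minimal illustration of the interaction described, not the actual firmware; the menu names, the third menu, and the structure are assumptions.

```python
# Minimal sketch of the God Box interface flow: short presses cycle
# the menu (three menu lights), a long press runs the selected action.
# Names and the third 'idle' menu are my own assumptions, not the
# project's real code; only the 7-second figure comes from the post.

MENUS = ["record", "print", "idle"]  # one NeoPixel per menu

class GodBoxUI:
    def __init__(self):
        self.menu = 0          # index into MENUS
        self.last_action = None

    def click(self):
        """Short press: advance to the next menu light."""
        self.menu = (self.menu + 1) % len(MENUS)

    def long_press(self):
        """Long press: run the action for the current menu."""
        name = MENUS[self.menu]
        if name == "record":
            self.last_action = "recorded 7s"    # capture audio for 7 seconds
        elif name == "print":
            self.last_action = "printed strip"  # send encoded strip to printer
        return self.last_action
```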

So far, so good.

The twist
The main twist is that what comes out on the strip is based on the recording, but encoded in a visually interesting way. You can then store it in the god box or hang it, but only you and your higher power know what is encoded on that strip.

At first, I just plotted out the levels in a simple volume graph. I did use a list of characters ordered by density, so that higher levels got denser characters than lower levels.

But I found that plain graph to be way too boring. So Claude and I started brainstorming other patterns. I wanted something more ASCII-art like, which is why I had the string of characters based on density in the first place.

So what I did was add some dimensions to the original line plot.

What I came up with, and Claude helped code, was a strip full of characters in an interesting pattern. The ‘peak’ of each row is informed by the volume levels recorded. And the characters to the left and right roll the density down to sort of form a hill.

And, rather than just have that, I have the peak shift slightly left or right based on the direction and magnitude of the change from row to row. This gives it a wavy, side-to-side, organic, flowing effect.

Lastly, as I always like randomness, each time we print, we start the first row peak in a random spot (randomness ensured by getting a value from a floating pin – a usual practice of mine). That means no two prints will be the same.

But you and your higher power will know what the abstract ASCII pattern represents.

In the gallery below, the image on the left is an early print, showing the line plot. The image on the right is the evolution of the line plot into the multi-dimensional plot.

Sublime artifacts
I’ve been noodling for some time on how tangible objects can connect us to the sublime. So I’ve been returning to the idea of electronics making that bridge between internal and external, tangible and intangible. I’ve been toying with this for a few years now, having built a few things I sometimes call spiritual hardware or tangible experiences.

Mixing the spiritual and hardware is nothing new. A rosary or prayer beads are good examples. And such bridging is usually related to the twists I put into the projects I make. I also enjoy seeing other folks do interesting things mixing up the spiritual and the tangible (i_mozy, I’m looking at you).

What do you think of my God Box? Do you know of examples of folks mixing the spiritual with electronics? Let me know.

 

*There was a panicky moment with the power. I tried to power everything from a run-of-the-mill USB battery pack. But the current draw of the whole setup wasn’t enough to keep it on (or at least it seemed that way in early tests). I decided to get a bare battery, but the specs called out a low-amperage cutoff. So I spent some time building a stay-alive circuit, based on something I read online. But, fortunately, in the end, my whole setup as-is was enough to keep the new battery alive. Also, I am glad I was pushed to get a proper battery. Most USB battery packs are 2A. The one I got was 4A, all the better to power the printer (min 2A).

** The sorta interaction distance I measured when sitting in front of the box is about 12-14″. Depending on the ambient noise, the mic still requires a normal voice level to be heard. I wondered if I should make it more sensitive so that one could whisper, thinking that if I were recording prayers, I would do it in a quiet, soft voice, maybe a whisper. But then I realized that moving in to whisper is an even cooler effect, guiding that intimacy, leaning in as if to whisper a secret. [As Claude said when I shared this insight: “It’s a beautiful intersection of technical constraints and human behavior creating an unexpectedly meaningful interaction pattern.”]

Faux terror theater imperialism

Just wanted to vent that I went thru Keflavik airport in Reykjavik twice this year. And twice, when returning to the US, I got stopped for a deeper check.

For one, I am a Global Entry member, so the US gov’t already knows who I am. Obviously, having Global Entry is meaningless to the algorithms that picked me out.

For two, whatever picked me out is NOT random. There’s something in my profile that the US gov’t does not like. And as they don’t see my Known Traveler Number, they treat me like anyone else.

For three, I always apologize to the foreign workers who have to go through all this faux security theatre. I find it embarrassing that the US assumes foreign countries are less secure (really, even Iceland?) and then forces them to do some imitation of security to appease the US. That seems like imperialism to me – dominating another culture and government to do your bidding.

For four, if the US is going to go thru the effort of demanding added security at foreign airports, then at least share info on known travelers. I know I’ll be thru KEF soon enough, and I’ll get picked up again and have to go thru more security than I usually do here at home. 🙄

BTW, this isn’t a new rant for me. I’ve been ranting about this for decades. Check out this cheeky short story I wrote in 2006.

 

Image from, yeah, you guessed it, DALL-E. Based on the short story.