Back at the start of May, I kicked off the first line of products for my haberdashery. As I mentioned before, this idea was brewing in my head for some time until I homed in on expressive t-shirts for makers and hardware hackers.
I am now 3 months in and have been learning a lot about what it takes to set up and run an Etsy store. But also, I have been in a deep creative drive – every time I think I’ve run out of ideas, many more bubble up.
While for many years I have been accustomed to making tangible electronics, this has become a consuming project to make wearable art. Haha.
Design decisions When I started the shirts, I was fixated on twisting famous quotes into a maker version – such as ‘I make, therefore I am – Descartes’ or ‘Ask not what your makerspace can do for you… – Kennedy’. Some were text only, tho, where I could, I added a related graphic.
Then I started making some bigger graphics to use, and in some cases, decided the graphics on their own could be cool. That’s when I started thinking of other things I could do, especially hand-drawn.
I’ve been having a blast doing hand drawings of famous circuits, chips, components, and dev boards. I think this will keep me busy for some time.
I made a video about this – it’s at the end, below.
The next three months The strategy for Etsy success is to regularly list (I’ve been listing daily for the past two months) to get to that critical mass when folks start regularly buying. I’m almost at the number of listings folks say is needed to reach that critical mass. Tho if things don’t sell well after the next milestone of listings, then I’ll need to reconsider the value and fit of my designs.
There are positive signals, tho. I am getting regular views and some shirts have been favorited (I just hope it’s not my mother checking in daily, haha). But I can’t say I’m ready to claim any success yet.
My goal for the next month is to get more feedback – is this something folks want, is the style anything folks want to wear, are the designs even good enough for folks? But I gotta get out there: There is a big maker event next month, there’s at least one meet-up next week, and I might visit the local makerspace to do some work. That might give me access to good feedback.
Everything points towards me trusting the process I have: being patient, being consistent with adding listings to the store, and getting to the critical mass point I am soon crossing.
Let’s see.
What about you: What do you think of these designs? Anything you’d wear? Feedback, welcome.
I’ve been meaning to post on this since the article came out a month ago (see link below). Back then, this article did trigger a bunch of discussion. But I’m not sure folks really nailed what the studies actually suggest.
Basically, there was a string of studies that observed people’s effort and output while using a genAI tool. And there were two key findings that were not surprising to me.
Recent studies suggest that tools such as ChatGPT make our brains less active and our writing less original. Source: A.I. Is Homogenizing Our Thoughts
Cognitive offloading Researchers measured brain activity in those who were using genAI to complete a writing task. They found less brain activity relative to those who were not using genAI.
This makes sense: the genAI tool takes over some of the cognitive effort needed to complete a task. This isn’t much different from a car taking away some of the effort to get from one point to another.
Indeed, human progress is chock-full of ways we’ve offloaded effort.* And by offloading such effort, we’ve been able to do more, be more productive.
Homogenization One other finding from these studies is that the output of these tasks ended up being similar, dare I say, bland. Again, not surprising. While we might think different people are driving the tasks, in reality, they are offloading the effort to the same ‘entity.’ If we all use Claude.ai, then the output might be similar because, in reality, the output is coming from the same place: Claude.ai.
Added to that, these big genAI systems, when left to their own devices, seem to drift towards averages, a consequence of them hoovering up as much as possible and making sense of it. That process tends to teach them what’s common rather than what’s special.
Looking in the wrong place When the article came out, most folks did the aggressive pointing, saying, ‘oh, no, our brains will be mush’ and ‘oh, no, all writing will converge towards a boring average grey goo’ and ‘look at all the pretentiousness of all those em-dashes and lofty, awkward words.’
Yet, these are not the real things these studies reveal.
What these studies reveal is: we cannot surrender all cognitive effort. As I keep saying, you cannot use genAI on autopilot.
My own experience with genAI tools is that the less effort one puts in, the worse the output is. There’s only so far one should be willing to cognitively offload.
Indeed, if you offload it all (go on autopilot), then you do get that boring, homogenous, identi-krap. What do you expect?
Pilot in the cockpit In summary, I am not surprised that when we use genAI on autopilot, our cognitive functions are quiescent and we spit out bland and similar output.
With genAI, you must be in the pilot seat, you must use your own brain.
As I learned from some smart folks recently: We must still be the ones making the decisions and putting in the effort. That’s how we’ll separate the slop from the good stuff.
*🤔 Geez, in what ways have we offloaded cognitive effort? Ask the Homeric bards in Ancient Greece what they think of books. Socrates, for one, was not too enthused.
I built a silent, tap-based Apple Shortcut that lets me like the currently playing song on Spotify.
Y’see, often my phone is in my pocket when I listen to music. And when I want to like a song, I don’t want to haul it out, open the song, like it, and then put it all back in the pocket.
Yes, Siri itself can do this. But for some reason, it doesn’t realize a song is playing and so it babbles on to tell me ‘it liked the song on Spotify’, rudely barging in during a song I obviously like.
Danger of over-engineering Being a maker, I started thinking of using some ESP32 or WiFi dev board to make an IoT-type button. I then realized that Bluetooth might be better, and that brought up the complication that I’d likely need some app on my phone to connect to the board and relay the info to Spotify.
Then I remembered val.town from a presentation by the wickedly creative Guy Dupont. Val.town allows one to script code that lives on the web and that can run on triggers or cron jobs.
Vibe living So today I had a brief moment and started posing the question to Geoffrey (what I call ChatGPT), mentioning I wanted to use Apple Shortcuts, which can send GET requests. But Geoffrey rightly pointed out some authentication issues.
I remembered val.town, mentioning it to Geoffrey, and that’s when things started falling into place.
Geoffrey helped me write a simple script for val.town that when triggered by an HTTP GET would ‘like’ the song that was playing.
After a few bumps and stumbles as I learned how to set things up in val.town (I knew what I was looking for, just not where things were), we were able to set up an Apple Shortcut that would send a GET to val.town that would then tell Spotify thru its API to like the playing song.
Geoffrey was helpful in breaking down the steps to set up the endpoint on Spotify, where to find the authentication credentials, and the code for val.town. We also added a few other features on the Apple Shortcuts side to read the result, and, if there was a successful ‘like’, play a tone and vibrate to signal success, but be silent on failure. [I was a bit concerned, but saw a delay between API call and UI update so needed to make sure the API call went thru.]
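For the curious, the round trip boils down to two Spotify Web API calls: fetch the currently playing track, then save it to the library. Here’s a minimal sketch in Python (the actual val runs as TypeScript on val.town, and my real code differs); the helper names and the injectable `send` callable are my own, but the endpoint paths come from Spotify’s public API docs.

```python
API = "https://api.spotify.com/v1"

def currently_playing_request(token):
    """Request spec for the track playing right now."""
    return ("GET", f"{API}/me/player/currently-playing",
            {"Authorization": f"Bearer {token}"})

def like_track_request(token, track_id):
    """Request spec to save ('like') a track to the user's library."""
    return ("PUT", f"{API}/me/tracks?ids={track_id}",
            {"Authorization": f"Bearer {token}"})

def like_current_track(token, send):
    """Fetch the playing track, then like it. `send` is any callable
    taking (method, url, headers) and returning parsed JSON (or None)."""
    playing = send(*currently_playing_request(token))
    if not playing or not playing.get("item"):
        return None  # nothing playing: the Shortcut reports failure
    track_id = playing["item"]["id"]
    send(*like_track_request(token, track_id))
    return track_id
```

The Apple Shortcut only needs to hit the val’s URL with a bare GET; the val makes the OAuth-protected calls server-side, which sidesteps exactly the authentication headache Geoffrey flagged for doing it from the Shortcut directly.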
And Geoffrey kept surprising me. After I mentioned Siri was still saying something after the Shortcut executed (because I was activating the Shortcut via Siri), Geoffrey suggested I trigger the Shortcut via Back Tap, an Accessibility feature I had forgotten about.
With a few clicks, I was able to activate Back Tap to activate the Shortcut that then leads down the path to liking the playing song. No Siri! Just Love. [That’s a line from Geoffrey, BTW.]
No autopilot necessary I do this ‘vibe coding’ a lot. And despite what dreamers might say, you still need to know what you’re doing. I knew enough to ask the question, knew how the syntax worked (or at least could grok it), and didn’t have any issue jumping into Hoppscotch (recommended by Geoffrey) to do some REST play.
I don’t think I could have done this on my own. I am not sure there are enough examples for me to learn from, other than the API docs from Spotify. Geoffrey, under my direction, was able to do something I could envision and see the steps for, but not code myself. Yes, complementary, as I still needed to know what was going on.
Onwards and upwards I only spent a few hours on this and ended up with something I’ve been ruminating on for a long time. I was able to ask for a simpler way, with well-remembered tips, and together Geoffrey and I built something very useful to me.
The funny thing is that Geoffrey then started wondering what else we could do, or what next. He’s always an eager beaver and trying to suggest the next step. But I told him we are good for now.
I’m the great-grandson of a haberdasher, as I keep telling my wife. And my mother is an obsessive maker – she used to have her own children’s clothing line and has quilted, crocheted, knitted, and sewed her way thru the past many decades (and still at it daily at 91!).
Only in the past few years, tho, have I started paying attention to what folks wear and had a hankering to design some clothing.
Start small I have a plan, but wanted to start small. So I brainstormed a bit (thank you, Claude) and came up with a focus on famous quotes twisted for makers. And the ability to use Print on Demand (I use Printify) and sell thru Etsy lets me experiment with design, delivery, and various product types.
I’m starting with shirts. While the t-shirt market is saturated to no end, I don’t see many maker-focused t-shirts. I could be wrong, but certainly my niche is special.
Again, starting small, so right now the website points just to the Etsy store. I already have four designs up, and many more designs ready in line. But I have a huge backlog of quotes, design ideas, and variations, so they’ll keep coming for a long time.
Of course, I’ll need to see what works, what resonates, and if there’s any any any interest. Haha. Then I can expand to other products with the same theme.
Build notes The past few months have been me going thru the design process – from idea to sketches to layouts to listing.
My wife uses an iPad Pro for work, so I borrowed it for some designing. The Apple Pencil on an iPad Pro is lovely. I use Adobe Fresco for the pencil styles and ability to do layers and SVG.
I use InkScape on my ancient MacBook. And have used a bunch of cool fonts from Google Fonts (what a great resource).
Printify makes it easy to lay out my design on a shirt. And I am currently using just what they have for mockups.
Interestingly, the whole listing process has so many steps and things to pay attention to. Rather than wait for all my designs to be up to announce this, I figured I’d start now and, when I can, add the other designs.
GenAI claimer You might be asking: “Are you using genAI?”.
[added 02jul25] I had a few designs with genAI elements and asked for some feedback from makers I respect. The response was strongly negative (“With love, no”), so I removed all genAI-containing graphical elements. I did not feel that my designs were slop, tho just hinting at it made everyone feel uncomfortable. And that’s not even getting into the ethics of it all.
So now ALL my designs are, to riff off of Tank in The Matrix, “one hundred percent pure old fashion homegrown designs, born free right here in the real world.”
So, “No, my designs do not have genAI.”
Next steps I’m excited to start on this journey. The idea has been burning in my head for a long time and good to see things progressing.
Now to promote the store, keep adding designs, and hopefully make at least one sale. Haha.
I look forward, tho, to the feedback and guidance of customers and enthusiasts as I take this further. So feel free to comment here, below, or on my Etsy store, or hit me up on Instagram.
I find myself struggling to articulate the essence of Maria Popova’s writing – how it weaves together the delicate and the scientific, the thoughtful and the wondrous, in an intricate tapestry of meaning.
I first encountered her through Brain Pickings (now The Marginalian), subscribing on and off over the years, and through her book discussions with Ira Flatow on Science Friday. But lately, her work has taken on deeper significance. Her ability to bridge poetry and science, to find wonder in both, touches me in a way few writers can, resonating with my interest in finding ways to connect to the sublime through tangible experiences.
From nature, words Her writing style, dense with scientific insight and literary connections, evokes something almost ineffable* – an emotional response that is hard to articulate. She raises both feelings and thoughts, wielding language with surprising precision (one of my many favorites being her description of the black hole at our galaxy’s center as “the open-mouthed kiss of oblivion”).
As a scientist, I resonate deeply with her scientifically-grounded writing. I recently finished, “The Universe in Verse”, with its 15 stories of scientific history, each paired with carefully chosen poems, such as Plath’s “Mushrooms” (and, oh, the artwork). I read some of the stories and poems aloud to my wife. Though I had to return it to the library, I’m considering getting my own copy for note-taking.
We need science to help us meet reality on its own terms, and we need poetry to help us broaden and deepen the terms on which we meet ourselves and each other. Source: The Universe in Verse Book – The Marginalian
Her writing is consistently immersive. I often find myself branching off to explore her references, diving deeper into the remarkable stories she tells, or delving into the works she’s summarizing or quoting. This happened with “Figuring,” her book exploring interconnected lives of 19th-century women scientists. I kept rereading fascinating passages until I ran out of time and had to return it to the library – another book I’ll need to revisit.
Science and poetry and humanity All of Popova’s prose is densely layered and interconnected – easy to get lost in, sometimes overwhelming with its nested subclauses, and endlessly quotable. But what permeates every word is her relentless exploration of meaning and wonder. Listening to her in a podcast interview and on Science Friday, she remains pragmatic yet tender, her thoughts and hopes radiating a quiet intensity.
For me, Popova is dangerously good, pulling me into a swirl of thinking and wonder and science – a place I want to be, but must approach mindfully. I was delighted to learn that “Universe in Verse” emerged from regular gatherings of poets celebrating the wonder of the universe through poetry. I look forward to seeing more from this corner of Popova’s remarkable world.
I’ll end with the inevitable: read “The Universe in Verse” and subscribe to The Marginalian for your weekly dose of wonder.
*In “Good Omens” by Pratchett and Gaiman, Aziraphale frequently uses “ineffable” to describe God’s divine plan. It’s his go-to word when things are beyond human comprehension or explanation.
The parallel with Popova is apt – both deal with trying to express the inexpressible, though Popova manages to find words for what often seems beyond words.
I’ve had an idea bouncing around my head for years.
In a conversation with Dan Alroy, I forget what we were talking about, he mentioned a God Box. Basically, as I understood, it’s a box you put in written snippets of, say, a prayer or a petition; a quiet place to put prayers, wishes, and intentions, to let them go. With my hardware mentality growing and thinking of sublime artifacts, tangible experiences that connect one to the sublime, I started imagining something you talk into that somehow captures your prayer in a digital format for – I didn’t know what.
In any case, I had a line in my notes to keep thinking about what a God Box meant to me.
Shide and prayer flags I had recalled that at Shinto shrines, there are zigzag strips of white paper, called Shide, that are folded and hung in various ways. I erroneously remembered them as having prayers written on them and blowing in the wind. But they do not have anything written on them. The process of folding and their rustling is what provides the spiritual effect.
There are Tibetan prayer flags which, as I understand, have sutras or prayers or mantras written on them. They come in different colors and are hung to blow in the wind. The wind is what carries the prayers to the greater world (not to the gods).
I wanted to do something similar, where you record a prayer and it is converted into something you could hang and let the wind carry off what was recorded.
The God Box is sorta that.
The build At first, I wanted to do something simple. Most folks turn to a Raspberry Pi, as it is quite capable for sound and stuff. [I have a related project that does use one, so watch this space for when that one is done.] But I tend to prefer to do everything with microcontrollers. And for what I was doing, a microcontroller would be more than enough. [Working on the related project I mentioned, I had also investigated the ISD1800 and ISD1700 families of sound chips – just to make it harder, haha.]
In the end, as I wanted to use CircuitPython, the core microcontroller I chose was the Raspberry Pi Pico. The output is a thermal printer. I use an Adafruit PDM mic for input. There are simple visuals from two NeoPixel strips – one for the three menu lights and one for the eight-pixel VU meter – and a button for working the interface. I found a nifty 5V/4A battery* to run the whole thing, and I cannibalized an old hair clipper for a chunky and useful switch.
Since this is meant to be a god box, I found a nice box at Hobby Lobby (don’t judge – I happened to be there with my mother) to which I added a hinge (intentionally letting the lid lean back a bit) and latch.
Everything was packed into an enclosure in the bottom and under the cover.
How it works I had written a lot of the code myself (Adafruit had some nice tutorials to riff off of). And then when I started using ChatGPT (Geoffrey) I was able to clean up my code. The project sat for many months and when I got back to it, I was already using Claude. With Claude we took the code to the next level.
The flow goes as such: You click to the record menu. The VU meter shows you your sound levels, so you can get a feel for how loud to speak.** Then you long-press into record mode and you record for 7 seconds. After recording, you click over to the print menu. Then you long-press to print out the strip.
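As a rough illustration of the VU-meter step, here is the kind of pure helper that maps a buffer of mic samples onto the eight-pixel strip. This is my own sketch, not the project’s actual code; the floor and ceiling calibration values are made up and would be tuned against the real PDM mic.

```python
import math

def vu_pixels(samples, n_pixels=8, floor=150, ceil=8000):
    """Map raw mic samples to how many of the n_pixels LEDs to light.
    Log-scaled so quiet speech still moves the meter; floor and ceil
    are assumed calibration points for the RMS amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms <= floor:
        return 0  # below the noise floor, nothing lights up
    span = math.log(ceil) - math.log(floor)
    level = (math.log(rms) - math.log(floor)) / span
    return min(n_pixels, max(0, round(level * n_pixels)))
```

On the device, something like this would run in the record loop, lighting that many NeoPixels each pass so you can see how loud you’re speaking.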
So far, so good.
The twist The main twist is that what comes out on the strip is based on the recording, but encoded in a visually interesting way. You can then store it in the god box or hang it, but only you and your higher power know what is encoded on that strip.
At first, I just plotted out the levels in a simple volume graph. I did use a list of characters by density, so that higher levels had different (higher density) characters than lower levels.
But I found that plain graph to be way too boring. So Claude and I started brainstorming other patterns. I wanted something more ASCII-art like, which is why I had the string of characters based on density in the first place.
So what I did was add some dimensions to the original line plot.
What I came up with, and Claude helped code, was a strip full of characters with an interesting pattern. The ‘peak’ of each row is informed by the volume levels recorded. And the characters to left and right roll down the density to sort of form a hill.
And, rather than just have that, I have the peak shift slightly left or right based on the direction and magnitude of the change from row to row. This gives it a wavy, side-to-side, organic, flowing effect.
Lastly, as I always like randomness, each time we print, we start the first row peak in a random spot (randomness ensured by getting a value from a floating pin – a usual practice of mine). That means no two prints will be the same.
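Putting those three ideas together – the density ramp, the drifting peak, the random starting column – the pattern generator can be sketched like this. This is a hypothetical reconstruction in plain Python, not the God Box’s actual CircuitPython; the character ramp and the drift scaling are my own choices.

```python
import random

DENSITY = " .:-=+*#%@"  # light-to-dense character ramp (my own choice)

def hill_strip(levels, width=32, seed=None):
    """Turn volume levels (0.0-1.0, one per printed row) into rows of
    ASCII: each row's peak character tracks the level, the peak column
    drifts with the level's change, and density rolls off to the sides."""
    rng = random.Random(seed)      # on-device, seed a floating-pin reading
    peak = rng.randrange(width)    # random starting column for the peak
    rows, prev = [], levels[0] if levels else 0.0
    for level in levels:
        # shift the peak by the direction and magnitude of the change
        peak = min(width - 1, max(0, peak + round((level - prev) * width)))
        prev = level
        top = round(level * (len(DENSITY) - 1))  # densest char this row
        rows.append("".join(
            DENSITY[max(0, top - abs(col - peak))] for col in range(width)))
    return rows
```

On the real box, the seed comes from reading a floating analog pin, so no two prints start their first peak in the same place.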
But you and your higher power will know what the abstract ASCII pattern represents.
Left in a gallery below is an early print, showing the line plot. The image to the right below is an evolution of the line plot into the multi-dimensional plot.
Sublime artifacts I’ve been noodling for some time how tangible objects can connect us to the sublime. So I’ve been returning to the idea of electronics making that bridge between internal to external, tangible to intangible. I’ve been toying with this for a few years now, having built a few things I sometimes call spiritual hardware or tangible experiences.
Mixing of spiritual and hardware is nothing new. A rosary or prayer beads are good examples. And such bridging is usually related to the twists I put in the projects I make. I also enjoy seeing other folks do interesting things mixing up the spiritual and the tangible (i_mozy, I’m looking at you).
What do you think of my God Box? Do you know of examples of folks mixing the spiritual with electronics? Let me know.
*There was a panicky moment with the power. I tried to power everything from a run-of-the-mill USB battery pack. But the current draw of the whole setup wasn’t enough to keep it on (or at least it seemed that way in early tests). I decided to get a bare battery, but the specs called out a low-amperage cutoff. So I spent some time building a stay-alive circuit, based on something I read online. But, fortunately, in the end, my whole setup as is was enough to keep the new battery alive. Also, I am glad I was pushed to get a proper battery. Most USB battery packs are 2A. The one I got is 4A, even better to power the printer (min 2A).
** The sorta interaction distance I measured when sitting in front of the box is about 12-14″. Depending on the ambient noise, the mic still requires some normal voice level to be heard. I wondered if I should make it more sensitive so that one could whisper, thinking that if I were recording prayers, I would do it in a quiet, soft voice, maybe a whisper. But then I realized, moving in to whisper is an even cooler effect, guiding that intimacy, leaning in as if to whisper a secret. [As Claude said when I shared this insight: “It’s a beautiful intersection of technical constraints and human behavior creating an unexpectedly meaningful interaction pattern.”]
Just wanted to vent that I went thru Keflavik airport in Reykjavik twice this year. And twice, when returning to the US, I got stopped for a deeper check.
For one, I am a Global Entry customer, so the US gov’t already knows who I am. Obviously, having Global Entry is meaningless to the algorithms that picked me out.
For two, whatever picked me out is NOT random. There’s something in my profile that the US gov’t does not like. And as they don’t see my Known Traveler number, they treat me like anyone else.
For three, I always apologize to foreign workers who have to go through all this faux security theatre. I find it embarrassing the US assumes that foreign countries are less secure (really, even Iceland?). And then forces them to do some imitation of security to appease the US. That seems like imperialism to me – dominating another culture and government to do your bidding.
For four, if the US is going to go thru the effort of demanding added security at foreign airports, then at least share info on known travelers. I know I’ll be thru KEF soon enough, and I’ll get picked up again and have to go thru more security than I usually do here at home. 🙄
BTW, this isn’t a new rant for me. I’ve been ranting about this for decades. Check out this cheeky short story I wrote in 2006.
Image from, yeah, you guessed it, DALL-E. Based on the short story.
About two years ago, I was at one of my Hardware Happy Hour evenings and one of the guys showed what he was doing with generative AI.
He showed a logo he’d developed. Which was interesting, as I was already playing with DALL-E and Midjourney at the time.
Then he said ChatGPT was great with code. I had known of GitHub’s Copilot by then, but as I didn’t use it, I didn’t think of it. But when he mentioned ChatGPT, I had to give it a try.
And I did. I used it much like I use text-based genAI today – as a way to improve the things _I_ write. ChatGPT was quite helpful in solving bugs, error messages, and the like. ChatGPT (who I call Geoffrey) was helpful in a lot of my coding, tho I used it in a specific way. Geoffrey was not so good with logic or some more complex coding. And it made weird mistakes. I still had to know what I was doing.
Nonetheless, I looked forward to going back to some earlier projects that had been stalled by more complex programming or could use a clean-up.
Once more with feeling Yesterday I had some free time. I had been rummaging thru my electronics for some item and stumbled upon an M5StickC Plus watch. I had bought it for a project a long time ago, but was frustrated with the documentation at the time, so put it aside. I wondered if Claude, with whom I have a Pro plan and to whom I’ve taken a shine, was up for some coding fun with this thing.
In the back of my mind I always had Matt Webb’s wonder at making an iPhone app he always wanted. I had not had any luck to date with Geoffrey doing the thinking. But these models get better, don’t they?
And I am hooked on Claude’s projects (a folder of documents to refer to) and artifacts (separate snippets of info that are easy to navigate and use).
Step by step I started by introducing Claude to the M5StickC Plus by sharing the product page. Then we batted about some thoughts on what to do with it. We settled on a Space Invaders sort of game. The game had actually been bouncing around my head these past few days, so it sounded right to start with.
Interestingly, Claude guided me step by step in making the game. And I let it unleash its creativity in how the game would look. Basically, I was the navigator and it was the driver taking my feature requests and preferences. And I provided feedback at every stage as we improved game play, the look and feel, and the code.
OK, while Claude did a lot of the heavy lifting, I still needed to know how to use the Arduino IDE, move around the code (Claude would give me snippets to add rather than the whole code every time), and catch some issues here and there (really, Claude only saw the full code a few times).
Mind blown What struck me was the creativity Claude showed. It had a vision (obviously based on the real classic game) and we collaborated over this vision. I also added a few features I wanted to see and Claude was game. So it was a real driver-navigator partnership (as in pair programming).
I was also impressed by how rapidly Claude was able to figure out all the game logic. I’ve never designed a game before, so don’t really know how to think in sprites, collisions, and game play. Claude could have gone on adding features forever, such as the UFO, sounds, power-ups, a leaderboard. But I stopped when I had a simple, decent game to play with.
I looked at some time stamps (alas, neither Claude nor Geoffrey keeps time stamps of the convos – which I always look for) and it seems we put in 4-5 hours of total leisurely work.
The one-person AI company I heard that Sam Altman, CEO of OpenAI, referred to a one-person AI company becoming a unicorn – one person, using AI tools (presumably dataAI and genAI), could build a company valued at $1B.
OK, so I’m not gonna build a $1B company. But in the past month I have used Claude in many more business-focused ways – exploring new commercialization opportunities, coming up with new services for my wife’s business, toying with a biz plan. I can see Claude, which, BTW, for what I use it for is way better than Geoffrey (sorry G), becoming a vital tool to multiply and complement my own abilities.
Isn’t that the promise of genAI?
edited 02mon24 to refer to pair programming (I could not remember the term so said producer-coder)