How many people were affected by the CrowdStrike meltdown?

How many billion people do you think were affected by this?

Microsoft said 8.5 million PCs (no Macs of course).

A tiny 42KB file took down 8.5 million machines. from: Inside the 78 minutes that took down millions of Windows machines – The Verge

But I can’t seem to find any figure for the number of people affected.

How many millions? Billions?

I’m in that number. We happened to have to check into a hotel that night. In addition to writing everything down on paper (including the credit card number!), they had to walk us to our rooms and let us in with a master key. So, yes, me and mine and the rest of the wedding guests were affected.

What about you?

 

Image from Verge article

Phone mirroring – something I did on my Nokia S60 almost 20 years ago

[sarcastic clap clap clap]

OK, I truly don’t know if this is a new thing or if Apple is Sherlocking some poor developer. But, congratulations Apple for releasing one more feature that I’d used forever ago.

In short, with the new iOS and macOS, you can mirror your phone on your Mac, clicking and doing stuff from the comfort of your keyboard.

Instead of a separate device, your iPhone is now just an app on your Mac. There’s a lot left to finish and fix, but it’s a really cool start.

Source: Phone mirroring on the Mac: a great way to use your iPhone, but it’s still very much in beta – The Verge

Been there, done that
There was a very talented Series 60 developer (did I give him an award during the first (and only?) Series 60 Platform awards?) with a range of useful apps (yes, we had apps long before Apple popularized them).

One of the apps was indeed an app to mirror your phone on your laptop. Really nifty, and I used it all the time.

That had to be around 2004-2005. I don’t recall. I left the S60 world in 2004.

Yeah, I have a long list of things that Nokia did back then that somehow Apple gets all the glory for. Tho, to be fair, Apple was the one that enthused folks and inspired them to engage, so they deserve all the glory.

 

Image from The Verge

Ford chief says Americans need to fall ‘back in love’ with smaller cars – duh

Jim Farley says country is ‘in love with these monster vehicles’ but big cars are not sustainable in the age of EV

Source: Ford chief says Americans need to fall ‘back in love’ with smaller cars | Automotive industry | The Guardian

Thanks, Jim. Always nice when a big guy like you says the same as li’l ol’ me.

Indeed, I think the trend of the past few years of larger SUVs and trucks actually has given folks the wrong expectation of what cars should be as we enter the EV-era.

Source: Make. Smaller. Cars. | Molecularist (17nov23)

 

Image from Guardian article

AI in the physical world

I’ve always been straddling the physical and the digital – thinking of how the two worlds interact and complement each other, and what it means for us stuck in the middle. And, in the last few years, thanks to price, ease of use, tools, and communities, I have become more hands-on mixing the physical and digital (and sublime) worlds.

Being in both the digital and physical worlds has also led me to think about data, analytics, data fluency, sensors, and users (and, indeed, to help others think and do in these areas, too). ML and AI, predictive analytics and optimization, and the like were all part of this thinking as well. So, with much interest, in the last two or so years I’ve been dabbling with generative AI (no, not just ChatGPT, but much earlier, with DALL-E and Midjourney).

Mixing it
In my usual PBChoc thinking, I started wondering what the fusion of the physical world and these generative AI tools would look like. And, despite spending so much of my life writing, I could not articulate it. I tend to sense trends and visualize things long before I can articulate them. So I read and listen for those who can help me articulate.

I wrote recently about ‘embodied AI‘ – the concept of AI in the physical world. Of course, folks think humanoid robots, but I think smart anything (#BASAAP). Now I see folks use the term ‘physical AI’.

New something?
Not sure how I missed these guys, but I stumbled upon Archetype.ai. They are a crack team of ex-Google smarties who have set off to add understanding of the physical world to large transformer models – physical AI.

At Archetype AI, we believe that this understanding could help to solve humanity’s most important problems. That is why we are building a new type of AI: physical AI, the fusion of artificial intelligence with real world sensor data, enabling real time perception, understanding, and reasoning about the physical world. Our vision is to encode the entire physical world, capturing the fundamental structures and hidden patterns of physical behaviors. from What Is Physical AI? – part 1 on their blog

This is indeed what I was thinking. Alas, so much of what they are talking about is the tech part of it – what they are doing, how they are doing it, their desire to be the platform and not the app maker.

At Archetype, we want to use AI to solve real world problems by empowering organizations to build for their own use cases. We aren’t building verticalized solutions – instead, we want to give engineers, developers, and companies the AI tools and platform they need to create their own solutions in the physical world. – from What is Physical AI? – part 2 on their blog

Fair ‘nough.

And here they do make an attempt to articulate _why_ users would want this and what _users_ would be doing with apps powered by Newton, their physical AI model. But I’m not convinced.

Grumble grumble
OK, these are frakkin’ smart folks. But there is soooo much focus on fusing these transformer models to sensors, and <wave hands> we all will love it.

None of the use cases they list are “humanity’s most important problems”, and the ones they do list I have already seen done quite well years ago. I become suspicious when the use cases for a new tech are not actually use cases looking for new tech. Indeed, I become suspicious when the talk is all about the tech and not about the unmet need the tech is solving.

Of course, I don’t really get the Archetype tech. Yet, I am not captivated by their message – as a user. And they are clear that they want to be the platform, the model, and not the app maker.

Again, fair ‘nough.

But at some level, it’s not about the tech. It’s about what folks want to do. And I am not convinced that 1) they are addressing an unmet need in the existing use cases they list, or 2) any of the use cases they list _must_ use their model – a big, revolutionary-change sorta thing.

Articulate.ai
OK, so I need to think more about what they are building. I have spent the bulk of the last few decades articulating the benefits of new tools and products, and inspiring and guiding folks on how to enjoy them. So, excuse me if I have expectations.

I am well aware that these past few decades we have been instrumenting the world: sensors everywhere, data streaming off of everything, and a growing need for computing systems to be physically aware.
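To make that concrete for myself, here’s a toy sketch of what a minimal “physically aware” loop looks like: readings stream off a sensor, a small model interprets a recent window of them, and something acts on the interpretation. To be clear, this is my own back-of-the-napkin illustration – the sensor, the threshold, and the “bearing wear” label are all made up, and it has nothing to do with Archetype’s Newton model.

```python
# A toy "physically aware" loop: stream sensor readings, interpret a recent
# window with a (stand-in) model, and act on the interpretation.
# Everything here is hypothetical -- NOT Archetype's Newton API.
import random
from collections import deque

WINDOW = 16  # number of recent readings the "model" looks at


def read_vibration_sensor() -> float:
    """Stand-in for a real sensor read (e.g., an accelerometer over I2C)."""
    return random.gauss(0.0, 1.0)


def interpret(window: list[float]) -> str:
    """Stand-in for a learned model: here, a crude threshold on variance."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return "bearing wear suspected" if var > 2.5 else "normal operation"


def main(steps: int = 200) -> None:
    recent = deque(maxlen=WINDOW)
    for _ in range(steps):
        recent.append(read_vibration_sensor())
        if len(recent) == WINDOW:
            label = interpret(list(recent))
            if label != "normal operation":
                print(f"alert: {label}")  # the "act in the world" part


if __name__ == "__main__":
    main()
```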

I’m just not sure that Archetype is articulating the real reason why we need their platform to make sense of that world.

Hm.

Watch this space.

Image from Archetype.ai video

Now that genAI remembers so well, I’d like a bit of forgetfulness

When I started using genAI tools like ChatGPT (whom I call Geoffrey), the tools could not remember what was said earlier in the thread of a conversation. Of course, folks complained. And to be fair, if you’re doing a back and forth to build up an output or an insight, having some sort of memory of the thread would be helpful.

Eventually, all the chat genAI tools did start remembering the thread of the chat you’re in. And I like that, as I have long-running threads I go back to in order to elaborate, update, or revisit a topic.

On topic in an off way
Then, all of a sudden, I started seeing a “memory updated” note from Geoffrey after I made certain assertions about myself. Tho I am still trying to figure out what triggers it, because for sure, sometimes it updates a memory exactly when I _don’t_ want it to remember something.

What’s more, I tend to have various threads going and I like to keep them separate. Some topics are best explored in their own silo, mostly so the ideation isn’t influenced by things I didn’t want in the mix (focus!).

So, one day, when I was in a special thread I had set up so I could ideate from a clean slate, I noticed the answer was not only very similar to an answer in another thread, but I also felt the other thread was influencing the current one (which I didn’t want).

As a test, I asked Geoffrey “what do you think is my usual twist to things?” And it replied correctly in the context of the ideation thread we were discussing. To be fair, the topic was in the same area as a few other threads. But for me, a key thing in ideation is to not get held back by previous ideas.

As an aside, one other feature that is gone: back in the day (like earlier this year), if you asked a genAI tool the same thing, you’d get a different answer. I think the memory is starting to make these tools reply the same.

Not just Geoffrey
And this extra knowledge and memory isn’t just a ChatGPT thing. At work, I use Microsoft Copilot. One of its incarnations (there are many, spread amongst the Office apps) has a browser interface and can access ALL my documents in SharePoint, the corporate SharePoint repositories, and all my emails.

That can be useful when creating something or needing to find something. But this can be a pain when you want Copilot to focus on just one thing.

For example, I wanted it to summarize a document I had downloaded, and I told it to use only that document for the summary. But it went looking across our repositories and email anyway, and the summary was a bit skewed by that info.

Remembering too much
I do believe that memory of some sort is very useful for genAI. And the ability to look things up in a repository of ever-changing data is also great.

But I think we’ve swung the whole other way: from something with a very short-term memory, to something that now remembers too much and no longer knows what’s relevant to remember.

I am sure in your daily life, you’ve had to tell someone, “thank you for remembering that, but that is not relevant right now.” Or, “thank you for remembering that, but I’d like us to come to this problem with a pristine mind and think anew, not rehash the old.”

Careful what you wish for
So we should be careful what we wish for. We got the memory ability we wanted. Now all I am asking is to let me tune a bit of forgetfulness or focus. [Crikey, it could be just in the prompt: “forget this”, “use just this thread”, or something like that.]
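To be fair, the raw APIs already work this way: the chat completions endpoint is stateless, and each call only knows the messages you send it. Here’s a minimal sketch of “use just this thread”, assuming the OpenAI Python client and a placeholder model name – keep one message list per topic and nothing bleeds across:

```python
# A minimal sketch of thread isolation: the chat completions API is stateless,
# so each call only sees the messages passed in. Two separate lists = two silos.
# The model name and prompts are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask(thread: list[dict], prompt: str, model: str = "gpt-4o-mini") -> str:
    """Append the user prompt to this thread only, and reply from this thread only."""
    thread.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=model, messages=thread)
    answer = reply.choices[0].message.content
    thread.append({"role": "assistant", "content": answer})
    return answer


# Two silos: nothing from one list ever reaches the other call.
work_thread = [{"role": "system", "content": "You help with work planning."}]
ideation_thread = [{"role": "system", "content": "Clean slate. No assumptions about the user."}]

print(ask(work_thread, "Summarize my three priorities for the week."))
print(ask(ideation_thread, "Give me five fresh twists on a picnic-table design."))
```

The consumer ChatGPT app layers its own memory on top of that stateless API – which is exactly the bit I’d like a dial for.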

Am I being persnickety? Or is this something that still needs better tuning?

Whimsy, whimsy, whimsy: I love this

Life is too short. I love seeing folks mixing things up like this.

I am a simple midwesterner living in the middle of New York City. I put my shoes on one at a time, I apologize when I bump into people on the street, and I use AI inference (https://universe.roboflow.com/test-y7opj/drop-of-a-a-ha) to drop hats on heads when they stand outside my apartment. Like anybody else. From: “I am using AI to automatically drop hats outside my window onto New Yorkers” [via Adafruit]

Video from I am using AI to automatically drop hats outside my window onto New Yorkers [I didn’t know how to embed from his site, sorry]
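If you want to rig up your own version, here’s roughly the shape of it: a camera loop that waits for a person and then fires whatever releases the hat. Note this is a guess at the mechanics, not his code – I’m using OpenCV’s stock person detector as a stand-in for his Roboflow model, and drop_hat() is a made-up placeholder for the actual dropper hardware.

```python
# Sketch of detection-triggered whimsy: watch a camera, and when a person is
# detected, fire the (hypothetical) hat dropper, with a cooldown between drops.
import time
import cv2


def drop_hat() -> None:
    """Hypothetical actuator call (servo, solenoid, string release...)."""
    print("Hat away!")


def main(camera_index: int = 0, cooldown_s: float = 10.0) -> None:
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    cap = cv2.VideoCapture(camera_index)
    last_drop = 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0 and time.time() - last_drop > cooldown_s:
            drop_hat()
            last_drop = time.time()
    cap.release()


if __name__ == "__main__":
    main()
```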

What’s the fascination with humanoid robots?

I was reading an interesting article on the fusion of robots and LLMs (see link below). One concept in the article that caught my attention was ’embodied AI’ – that current AI is ‘disembodied’ but once you ’embody’ it, the AI can learn about the world in the same way as living creatures do.

Well, not ‘living creatures’ but ‘humans’ is what the article focuses on.

Dr Kendall of Wayve says the growing interest in robots reflects the rise of “embodied ai”, as progress in ai software is increasingly applied to hardware that interacts with the real world. “There’s so much more to ai than chatbots,” he says. “In a couple of decades, this is what people will think of when they think of ai: physical machines in our world.” As software for robotics improves, hardware is now becoming the limiting factor, researchers say, particularly when it comes to humanoid robots (from: Robots are suddenly getting cleverer. What’s changed?)

Hands on my body
I like ‘embodied AI’ as it touches on some thoughts I’ve been having on connecting an AI with some action in the physical world. I think folks can be unfair calling some AI stupid, when the poor AI not only doesn’t have any connection to the physical world, but also never had the benefit of millions of years of evolution _in_ that physical world (for example, see my comment here).

So, yeah, I suppose the biologist in me groks why embodiment could do wonders for AI learning.

But then, with ‘the light is over here’ lazy thinking, folks start wanting AIs to be human-smart and to navigate the world like humans. Because the world is built for humans.

Hm, the biologist in me asks ‘what’s the fascination with humanoid robots?’

BASAAP all the way down
The biologist in me sees humans as a single species among millions. And one answer to being in the real world.

Rather than say, ‘let’s make humanoid robots,’ we should be first asking ourselves (just like nature asks every day) what is the need at hand and what best addresses that need. Nature has exploded into millions of species, each evolved for their needs. The same should be for AIs embodied in robots.

Indeed, I claim that there are many tasks that humans do that would be better suited for something of a very different shape and form. Especially if that thing were clever.

For example, I wonder if a horse’s intelligence would be better for a car than a human intelligence.

It, robot
It is not that I don’t believe in humanoid robots. It’s just that I think most folks jump to humanoid robots without asking if humanoid is the right form factor.*

Decades ago** I learned the term ‘horses for courses’ – each horse has the course it is best suited for.

While I think embodied AI is a great thing to do, I just hope folks realize that embodiment can take many forms (geez, just think of all the forms manufacturing bots have).

We don’t need to ape God and make robots in our own likeness all the time. 🙄

 

* Frakkin’ heck, how many times in Star Wars was either C3PO or R2D2 absolutely not well suited for the environment they were in? What about Daleks (sorta)?
** Earliest reference in this blog is back in 2005, tho I remember already using it in my writing in 1999.

Will more free time destroy the world?

I had a brainwave [coincidentally while on my own free-time holiday]. I had listened to a series of articles on long work hours in China, changes in American work hours versus other countries’, and an investigation into the national psyches behind national days. After all that talk about work vs leisure, I realized that the general trend is toward less work and more leisure.

Industrialization, automation, AI – each wave of tech has promised us we’d work less and have more time for our own pursuits. Indeed, I feel that as we build tools to take over our workload, we’ll have more attention left to walk up Maslow’s Hierarchy.

Beware of what you ask for
Then I thought back to some holidays I’ve had, like Florence, where everything was mobbed and the sights were overrun and oversubscribed. If here in the US we gripe about travel during Memorial Day and Thanksgiving, I can’t imagine the gripes about travel during Chinese New Year.

Let’s just say Europe and North America start doing a three-day work week. And that the US starts making 5 weeks off normal (like lots of Europe).

Where are all the people going to go? Dubrovnik (pictured above) is always my example of over-tourism.

What if we have over-tourism all the f-ing time?

Yeah, free time is gonna kill the world. Let’s all just stay shackled to our desks.

😜

What do you think?

Ghost’s manifesto is a call to take back the graph, take back the cloud

Years ago, I was trying to find a way to keep folks connected across all their various digital streams. At the same time, the large Platforms were kicking off the meteoric rise that came to define the internet for the next 20 or so years. As you have no idea what I was doing back in 2007, you know who succeeded.

In the last few years, tho, people have soured on the way the internet has been built, and all eyes are on the Fediverse. The Fediverse is a return to the morselization of the internet – a way to take back our social graph, to take back the things we keep in the cloud.

Way back when I said:

Do you want to have full control over your data, your social graph, your communications, just like you do now with your mobile phone? Source: Take back the graph! Facebook, The Cloud, and a return to the basics of social networking – Molecularist

Now, Ghost, a newsletter platform whose star has risen after the debacles at Substack, has published a manifesto of sorts in support of ActivityPub, the leading protocol for building the Fediverse.

As the article says:

This has long been the dream, and it seems like the platforms betting on it in various ways — Mastodon, Threads, Bluesky, Flipboard, and others — are where all the energy is, while attempts to rebuild closed systems keep hitting the rocks. Source: Newsletter platform Ghost adopts ActivityPub to ‘bring back the open web’ – The Verge

Alas, I really haven’t dabbled in all this Fediverse stuff. Yes, I have been banging this drum (or a version of this drum) for a while. But my head (and my communities) are elsewhere these days. So, while all this is exciting to me, the Fediverse is not really part of my life.

Maybe some other time.

Tho this quote captures the expectations I and others have as the Fediverse takes flight.

That’s the fun part about the fediverse — it’s a lot of old ideas about the web being open and interoperable, but there’s still a lot of new things yet to be invented on top of that foundation. At this point I’m not sure any social platform that launches without an eye towards federation stands a chance, really. Source: Newsletter platform Ghost adopts ActivityPub to ‘bring back the open web’ – The Verge
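For the protocol-curious, here’s roughly what “speaking ActivityPub” looks like from the outside: a WebFinger lookup turns a handle into an actor URL, and fetching that URL as activity+json returns the actor’s inbox and outbox. A small sketch – the handle below is a made-up example, not a real account:

```python
# Sketch of discovering an ActivityPub actor: WebFinger lookup, then fetch the
# actor document with the activity+json media type.
import requests


def fetch_actor(handle: str) -> dict:
    """handle like 'someone@example.social' -> the ActivityPub actor document."""
    user, host = handle.split("@")
    webfinger = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
        timeout=10,
    ).json()
    # Find the link that points at the actor document itself.
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("rel") == "self" and "activity+json" in link.get("type", "")
    )
    return requests.get(
        actor_url, headers={"Accept": "application/activity+json"}, timeout=10
    ).json()


if __name__ == "__main__":
    actor = fetch_actor("someone@example.social")  # hypothetical handle
    print(actor.get("inbox"), actor.get("outbox"))
```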

Banning foreign-owned companies isn’t unprecedented in the US; TikTok is just one of the latest

The whole TikTok-banning story has been interesting to me. I understand that there is little faith in dominant foreign-owned orgs that either sit on our networks or in our social circles.

For me, because it’s in my face often in my daily work, I am reminded of Merck. The US cleaved off the US branch of Merck during WWI, leaving us with 100 years of a US Merck and a German Merck.

More recently, the US government has placed restrictions on Huawei and Kaspersky.

How will this TikTok story progress?

Having lost its fight in Congress, TikTok faces a tough battle in US courts and with China’s own export controls. Source: TikTok has a tricky legal case to make against the ban law – The Verge

The sad thing is, at the heart of this is Trust. China’s behaviors, which I will point out are of a different sort than those of Russia or Korea, have generated a high level of distrust across the world.

Indeed, there are other countries with various bans on TikTok.

And it doesn’t help that China itself bans a slew of foreign-owned internet services.

Banning TikTok, tho, will be interesting. This isn’t some hidden tech, but a leading social network platform in the hands of millions.

I am expecting the eventual outcome to be a ban on government devices, the formation of a US-owned subsidiary that keeps US data in the US (à la Merck), and a string of retaliations by China.

What do you think?

[BTW, is DJI next in line?]