ISAMI meetup April 26, 2017, Column Health Somerville

Hello folks,

Next ISAMI meetup is scheduled!

This month, Emily Lindmer and Vincent Valant, from Hey,Charlie, will talk about their app, how they got to where they are, and where they are heading.

Hey,Charlie is a mobile app for opiate addicts who are in recovery or are seeking recovery. The core of the app helps addicts manage the telephone numbers of people who are good or bad influences, providing awareness and encouragement of the right behaviors and relationships.

Come to the meetup to hear the whole story!

Date: Wednesday, April 26, 2017
Time: 6:30pm-8:30pm
Column Health Somerville
401 Highland Ave.
Somerville, MA 02144

Please spread the word. Our challenge to you: bring a buddy along.

Best regards,


Follow us on Twitter at @ISAMI_Boston
Join our LinkedIn group – “Innovative Solutions in Addiction and Mental Illness”

We should all rethink the level of access of our mobile devices

A recent second phone theft in the family prompted me to revisit the extent to which the iPhone is secure from unwanted access. The key security hole is the lock screen. The last major update to iOS created a radically new lock screen.

Addicted to the locked screen
I will admit that I am a heavy user of my phone when it is locked. I regularly ask Siri for things, check calendar and other items in the Today section, review notifications, read and reply to messages, and manage phone settings and music in the Control Center. Indeed, that’s almost everything that I know one can do while the phone is locked.

I recently upgraded to the new 7 and activated Apple Pay in my lock screen – a press of the home button and all the payment options pop up. A nice quick access.

But seeing how easily my cards popped up gave me the heebie-jeebies, so I turned it off.

And then my daughter had her phone nicked out of her coat.

Going dark. Easily.
Apple has some good phone finding and locking capabilities. When I got the text from her that her phone had been stolen, I got onto Find My iPhone, tried to locate it, and sent the Lock command and a note. Only thing, the phone was dark.

OK, so, yes, the phone can be turned off without knowing the lock code. But a bit more troublesome, the phone can be taken offline by raising the Control Center and going into airplane mode. That means that the phone can be futzed with while it is offline. If the phone could not be taken offline, the cellular connection would at least allow some communication and location tracking.

What else?
As I was texting with my daughter, I was concerned that the thieves would be able to see my messages. That’s when she told me that she turned off the message notification on the lock screen for her own privacy.

A quick look online showed that many folks have realized that putting so many things on the lock screen presents a privacy and security risk for users (this is a good article to read).

Ubiquitous computing headaches
I’ve been reading a lot about the proliferation of internet-connected devices (aka IoT, but I knew it back in the day as ubiquitous computing). One common alarm is that low-level security leaves the barn door wide open for hackers. Often, though, folks contrast washing machine and thermostat risks with the security of phones and computers.

Not so fast.

We need to assess the security of our phones and computers as well, in their mobile context. And users are not equipped to understand how to get to a secure state. Though I am sure millennials, told to lock down their phones and computers from nosy friends and family, can figure out all the ways to stay private.

More serious options
The ability to turn off the phone without a passcode is an Achilles’ heel. The remote locking and wiping of the phone is good, but perhaps there needs to be something more for when someone tries to either reuse parts (which I think is the usual fate of these locked phones) or connects to iTunes. Indeed, my wife wishes there were a halt and catch fire, yes, catch fire, to spite the thief. Perhaps we should think through the whole loss/theft experience: how can we counter the phishing attempts to get the iCloud info to unlock the phone, how do we make it easy for carriers and Apple to be aware of the theft, how do we make it easy to know the IMEI and other identifying info after the theft?

An experiment
I have now turned off anything that might show up on or use my lock screen. As a heavy user, I want to see the impact on my usage and figure out the balance between privacy/security and usefulness.

Also, I’ll let the rest of my family know about these privacy holes, including passwords on computers and phones.

Already I’m missing Siri: I was on a run and could not control the music player, or find out who was calling or messaging me.* And around the house, I can’t just holler to Siri for some bit of info. I wonder if there’s a quick way to turn Siri on and off on the lock screen, but then, it’s really not much of an assistant.

What about you?
Do you have a story of being lulled into a security breach while using the lock screen of your phone? What do you do to stay private?

*Hm, now the Apple Watch really seems useful as a second screen.

Image from ZDNet

An ancient memorization strategy and becoming a Mentat

I was an avid reader of Frank Herbert’s Dune series of novels. One interesting thread in the books was that at some point, long before the start of the first novel, humans revolted against thinking machines (and in Herbert’s politico-religio-scientific melange, he called it the Butlerian Jihad). A response to the destruction of all the thinking machines was the Mentat, a human trained and drugged to replace computer thinking and feats of calculation.

The concept has always fascinated me. And when I think of all the things the mind has been shown to do, I can’t help but think that we can indeed map what a modern-day Mentat might be able to do.

Remember well
Have you ever read an ancient epic poem, such as Homer’s Iliad? The Iliad, like many other ancient epic poems, was initially an oral poem, passed on from person to person, long before it was a written poem. While we think of this as a feat of memory, we see similar feats in other areas: people who can recite Pi to many digits, pianists who can play long orchestral concerts from memory, and little kids who can memorize cards before they can read.

A recent article in The Verge mentions a study of “loci,” a method also known as the “memory palace,” where a mental map of places is used to remember objects. Indeed, this process might affect the brain.

“It shows that superior memory on that level is not something that is just inborn talent, but is something that essentially can be learned by everyone”

Source: An ancient memorization strategy might cause lasting changes to the brain – The Verge
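The method itself is mechanical enough to sketch in a few lines: fix an ordered route of familiar places, attach one item to each place, and recall the items by mentally walking the route. The route and list below are invented purely for illustration.

```python
# A toy sketch of the method of loci ("memory palace"). You pair a
# fixed, ordered route of familiar places with the items you want to
# remember, then recall the items by walking the route in order.

route = ["front door", "hallway", "kitchen", "stairs", "bedroom"]
to_remember = ["eggs", "stamps", "batteries", "basil", "tickets"]

# Encoding: attach one item to each place, in route order.
palace = dict(zip(route, to_remember))

# Recall: walk the route and read off the items.
recalled = [palace[place] for place in route]
assert recalled == to_remember
```

The trick, of course, is that the route is already deeply memorized, so the only new work is forming vivid place-to-item associations.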

When I hear that techniques like this one actually cause changes to the brain, I start thinking again of Mentats.

For example, I have heard tales of savants who can make highly detailed drawings in a distributed fashion, the final drawing only revealing itself as the patches grow and connect. Or the folks who can name the day of the week if you give them a date, or, even, remember a day completely if you give them a date. Or how about folks who can calculate large numbers instantly?

These abilities are in our brain and technically we should be able to train for them. My one concern is whether these Mentat-like abilities and our neurotypical abilities are mutually exclusive, sort of like an autism spectrum.

Pulling it all together?
One last thing: Adderall is a common drug to treat attention deficit and hyperactivity disorder. But it is also a coveted college studying drug, as it seems to help one concentrate and focus (and if you catch The Expanse, you’ve seen the Martians use something similar).

What other drugs out there allow us to tap into our mental skills? How can we start training our brain for feats of memorization, calculation, recitation?

For sure, these capabilities are out there and we have many examples. But can we pull them together and give someone the wide-ranging computational and inference abilities of a Mentat?

What do you think?

Image from The Verge

Let’s make 2017 the Year of “Prove it” in healthcare innovation

Mahek and I have a running conversation on big company meltdowns (mostly in healthcare). For each one, we discuss who was involved (personalities, investors, consumers), what was the promise and hype, what was the disconnect with reality, and what triggered the ‘oh shit, this is krap’ moment for all.

Of course, at the top of our list is Theranos. But there were other companies who claimed big, grew fast, became famous, and then bombed.

Is this just failure to deliver or is there a more insidious problem at work? Erin Griffith wrote an insightful article on fraud in Silicon Valley. She writes about a long list of companies who took investors along for a ride, with a mix of bluster and swagger, often with catastrophic side effects to the industry and the people involved.

And part of me wants to believe that it’s deliberate fraud. But I like to give the benefit of the doubt, and think that what comes into play is a wishful thinking that then gets locked in and forces the company to claim the wishful thinking is true. Kinda like a white lie turning into a smoking black grease of a lie that sticks to everyone and everything and can’t be removed.

I’ve seen it up close.

An antidote to this potential fraud is actually proving your solution works as advertised. No, it’s not enough to have customers, as they can also be hoodwinked by the hype; keep in mind Theranos had a customer: Walgreens, not too shabby. No, it’s not enough to have good funding; Theranos had solid funding, though from many folks with no experience in healthcare. No, it’s not enough to have your own secret data proving it works; you need to be able to show it to others, transparently.

In short, the proof of the pudding is in the tasting. If no one can taste it – you get what I mean.

Prove it
Lisa Suennen, who has a good eye for healthcare investments, wrote a great article on health startups declaring:

“the digital health theme for 2017 should be: you show me the evidence it works, I’ll show you the money!”

In the article she points out the trends in health investment (fewer dollars for more companies), consumer trends (not favorable), and the value these health companies have provided investors (still to prove).

One area she discussed revolved around there being so many companies trying the same thing:

“I would love to see a lot less of companies that are “me too” and a lot more of companies with unique solutions to underserved problems.”

I have often mentioned that folks are focusing on the big three (obesity, diabetes, cardiovascular health) to the exclusion of other areas, such as poverty, access, mental illness, and addiction. How many fitness band companies can the market support? And why is it that none of them are making any headway?

But the article on the whole is about how investment in healthcare gadgets has seemed to be about claims and shiny devices, with little proof of effectiveness.

“I think that the convergence of IT and healthcare is here to stay and the trick is making it useful not cool. Trendiness does not equal value. Technology does not equal good.”

“I’d also like to hear some evidence of how all of this big data/AI/machine learning work is resulting in actual activity to change physician and consumer behavior, particularly around improved diagnoses and avoidance of medical errors. So far most of the talk has been about technology and too little of the talk is about results.”

Creative distraction
Eric Topol, a big booster for the use of digital tools to transform medicine, actually has a healthy dose of skepticism when approached by companies making bold claims. In a recent interview, not only does he raise his eyebrows in doubt, but he admonishes Forward, a healthcare startup with a coterie of notable investors, to prove their methods and technology. He was baffled by all the PR glitz and saw some things that just don’t make sense, especially because he basically knows all the tech that’s out there.

“I would be firstly interested in what new tools they are using because are they proven, are they validated, are they well-accepted, and moreover I am particularly interested in publishing results to show that this gadgetry is helping these people,” he said.

What’s interesting to note is that in the article, he also mentions the ‘prove it’ he gave Theranos’ Holmes when she approached him. He was impressed, but pushed her to do a head-to-head comparison with established tests.

“If you want to be an outsider and be a disruptor of healthcare you are still held accountable to the same standards of ‘You got to prove it.’ One of the things is that if you have technology that’s not proven, everyone assumes that it’s harmless but it could actually be harmful when you get incidental findings or if you come up things that are not true.”

Put the lime in the coconut!
I claim that none of this is surprising. Investors partly have wishful thinking. But also, they partly have no idea what they are investing in.

Theranos had that ‘maverick’ Jobsian feel to it, trying to disprove that “only good science, led by medical professionals, backed by data and able to withstand review by outsiders, can succeed.” At some level, that is true. I don’t think you always need medical professionals (don’t flame me). But you always need good science. As this article is kind enough to note by comparing Theranos’ go-to-market strategy with two others, you need to show evidence! Prove it!

If you are going to claim that your baby monitor catches SIDS, then it better. No wishful thinking can change the truth. And you are putting a lot of children at risk. Oh, someone already did this and the FDA isn’t happy.

If you’re going to be used by folks making sure they are not too inebriated to drive, you better be accurate. Oh, someone screwed up and is being punished.

If you’re going to claim that consumers want to measure their activity, you better be able to articulate why someone wants to measure their activity. Otherwise, you’ll not be able to last. Oh, FitBit isn’t doing so well.

Digital snake oil
This sobering reality is not recent. FT wrote about this early last year. And my skepticism about the use of devices in healthcare has been well documented for many years.

Smartwatches, activity sensors, whiz-bang care models that are more flash than substance – this is the new era of digital snake oil and the only way we can get through this is by having everyone transparently prove their value.

Note, I don’t mean to say all of this area in healthcare is digital snake oil (as others have claimed). But we all need to be vigilant and demand proof for every claim.

Let’s make 2017 the Year of “Prove It”.

What do you think?

Image from hirotomo t

Fascinated by fake news: AI, content, and being human

All the hubbub around fake news and the presidential elections got me thinking of AIs; how we find, review, and share content (a long term topic for me); how human trust and belief hinge upon millennia-old social strategies. And I’ve read a bunch of articles around how fake news has set off a bunch of navel gazing, soul searching, and finger pointing.

A BuzzFeed News analysis has identified the 50 fake news stories that attracted the most engagement on Facebook this year. Together they totaled 21.5 million likes, comments, and shares. Of these stories, 23 were about US politics, two were about women using their vaginas as murder weapons, and one was about a clown doll that actually was a person the whole time.

Source: The top fake news stories of 2016 were about the Pledge, poop, and the Pope – The Verge

Human in the machine
I like to consider myself a relatively seasoned netizen, one who has developed a few habits to fend off spam, phishing links, and disreputable content on blogs and social media. With respect to fake news, I’m a skeptic already, even questioning news from legitimate sources, so I think what’s between our two ears as regular humans is a good start for getting savvy to fake news.

Indeed, Kyle Chayka, in The Verge, has a thorough article showing aspects of fake news that stand out stylistically online. Alas, he also points out that while these sloppy stylistic tells persist at the source (the fake news providers see no benefit in polishing them), Google and Facebook mask the tells by stylistically homogenizing all news in our preferred mobile interfaces. That would certainly change if polish brought the providers any benefit.

When enormous, undiscerning platforms like the two tech giants hoover up content, they disguise it, no matter the source. It doesn’t have to be that way.

Source: Facebook and Google make lies as pretty as truth – The Verge
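For illustration, the stylistic tells described above (shouty capitals, exclamation pileups, classic clickbait phrasing) could be sketched as naive heuristics. The function names, patterns, and threshold below are my own invention, a toy scorer rather than anything from the article or a real classifier:

```python
import re

def style_score(headline: str) -> int:
    """Count crude stylistic red flags in a headline."""
    flags = 0
    if headline.isupper():                      # ALL-CAPS SHOUTING
        flags += 1
    if headline.count("!") >= 2:                # exclamation pileups
        flags += 1
    if re.search(r"\byou won'?t believe\b", headline, re.I):
        flags += 1                              # classic clickbait phrasing
    if re.search(r"\b(SHOCKING|BREAKING)\b", headline):
        flags += 1                              # breathless keywords
    return flags

def looks_suspect(headline: str, threshold: int = 2) -> bool:
    """Flag a headline when it trips enough red flags."""
    return style_score(headline) >= threshold
```

This is exactly the kind of surface signal that platform homogenization erases, which is why style alone can never be a durable defense.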

With that ‘analytics between the ears’ sort of spirit, Google and Facebook think they can solve this problem (urgent for them, since they are the primary vehicles for fake news and for its stylistic homogenization). Facebook has toyed with editorial boards, human moderators, and expanding their objectionable content process to include fake news. It seems Facebook is making a concerted effort to also include existing fact checkers, label suspect content more visibly, and tweak their ad model to reduce click-bait incentives.

Facebook is inherently a human-based business, so good to see them including humans in the process of tackling fake news. Google, on the other hand, is the big SkyNet, AI in the sky. Their take on fake news is better algorithms, not always with a decent outcome.

Ghost in the machine
There is a business model behind fake news and changes to the playing field will lead to changes in the look and feel of fake news, so long as the business model supports those changes. Therefore, we’re in for an arms race.

Fake news fighters are up against determined and smart individuals who will eventually use AI to keep ahead of anti-fake news systems in a battle worthy of a Turing Test. For example, altering images is an old technique, but what happens when AI can make convincing image (and audio and video) manipulation at a large and overwhelming scale (smile)?

Oh, you say, but AIs won’t be able to write the fake news itself.


Already, many legal notices, sports scores, and other semi-formatted content are being algorithmically generated for online publication. Even AI novices can create a content generator. I have mentioned before how there’s a whole field of computational literature and how we are applying AI to creative endeavors we think are uniquely human. What happens when AIs create fake news that looks stylistically real, sounds real, and claims things that seem real?
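A minimal example of such a generator is a word-level Markov chain: map each word to the words that follow it in a corpus, then walk the map picking random successors. The tiny headline corpus below is invented for illustration; real generators train on large scraped archives.

```python
import random

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    chain = {}
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length=8):
    """Walk the chain from a start word, picking random successors."""
    words = [start]
    for _ in range(length - 1):
        followers = chain.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Invented three-headline "corpus" for the sketch.
headlines = [
    "senator denies shocking claims about secret deal",
    "secret deal exposed by anonymous insider sources",
    "insider sources reveal shocking truth about senator",
]
chain = build_chain(headlines)
```

Even this toy produces plausible-sounding mashups of its inputs; scale the corpus up and add stylistic polish, and the output starts to pass a casual skim.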

Evil in the machine
Fake news isn’t going away any time soon. Hackers will take over legitimate channels to spread fake news. There are a ton of very big elections coming up in Europe, and fake news is already rearing its ugly head. And while Facebook and the German government are working hard on the legal, professional, and technical aspects of combating fake news before the elections, how do you counter sources of fake news outside the legal structure, outside your borders? How do we filter the signal from the noise, separating the good from the bad? We struggle with this filtering even when sources are named and content is brief.

Us in the machine
I truly believe that the social skills we have to be skeptical, deal with claims, and understand information extend well into the online world. The challenge is to maintain the cues and context we are accustomed to using IRL, and to map them and make them useful online (I explored an aspect of this in my ramblings on noise posts almost 10 years ago).

The online world has scaled up our capacity to create content and communicate. What hasn’t scaled is our ability to grok it all in the way we’d do face to face. And the social ties that bind us, inform us, and provide context have been frayed or blurred online, making judgment calls even harder (just witness the echo chambers reinforcing themselves on Facebook).

To me, the breakdown we see with fake news is the gap between who we are as social beings and the tools we use online. The challenge is to take fake news as an opportunity for us to reassess what we do online, how we continue constructing the layer of humanity that is the online world, and how we use the online world as social beings endowed with certain unalienable social abilities.

Image by Christopher Dombres

Where were you 10 years ago today?

On Jan 9th, 2007, I was in London, at the IDEO offices, sitting in one of their conference rooms with a bunch of Nokians and IDEO-ans. I do not recall if we were streaming the audio or refreshing a page of someone live blogging from the iPhone launch event [Update: Matt Miz says live streaming.].

We knew the phone was coming. But it was a momentous evening for us, nonetheless. I think we all knew it was the death-knell for Nokia if it couldn’t match what Apple was bringing to the table. We also cynically shared what we thought the executives’ reactions would be. In those reactions was a hint of the fear and the hubris that Nokia Mobile Phones couldn’t overcome.

The reason we were at the IDEO offices was to design a new world, where the internet and the mobile were united.* We envisioned a time when we’d be online with our phones all the time, constantly connected to our people and their content. Nokia was to be the gateway, the interface to a collection of small windows we could peek through or step through, depending on how much we wanted to do. Holding all the morsels of our internet experience together, Nokia would be the essential brand.

But that future never came to be. Though I see elements of what we envisioned spread across the world today.

Ten years on, Google, Apple, and Facebook are losing their grip on hegemony, much as Nokia did back then. Once more, the players mediating our experience with reality are changing as they offer us new ways to connect, create, share, trade, and transport.

We knew Jan 9th, 2007, would mark a deep line in our lives. Alas, I haven’t seen anything since that has had that kind of built-up expectation, reception, and potential impact. Ten years from now, what from 2017 will we all look back to as the deep shift in the future as we thought it would be?

Image from Kim Støvring

*Indeed, we were there for the kick-off meeting for the project I was to lead, as it was my vision, for those who care to know, or who have tried to forget.

Select quotes from the conversation on AI between Barack Obama, MIT’s Joi Ito, and WIRED’s Scott Dadich

I finally got around to reading this interview transcript. Dadich moderates an interesting discussion between Ito and Obama mostly around AI, but also touching on related issues around the impact of new technologies on business and people.

Below, I’ve excerpted some of the more interesting things that were discussed. Please take these excerpts as a reflection of what I am thinking about and worthy of further exploration.

AI in general, AI in particular
Here’s a man who has to keep the whole world in his head, and he’s really articulate about AI – where it is, where it’s going, what are the impacts, what are the benefits.

There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? – Obama

Obama rightfully points out that specialized AI is being used everywhere today. But the AI from sci-fi, generalized AI, the AI that everyone fears, is a long way away. Nonetheless, having that broader fantastical view gets us thinking of the implications AI has in all aspects of our lives, especially when it comes to how we deploy specialized uses of AI.

And, like a president should, while Obama sees the “enormous prosperity and opportunity” AI presents, he is also concerned with the impact AI can have on jobs and wages as certain things are automated by AI.

Ito calls for AI really to be called extended intelligence. This is a great term to describe what I have said before: that AI should augment humans, not try to replace them. And indeed, many jobs that have been more cognitive will be disrupted by AI. How we choose the balance between full and augmented automation will impact those jobs and people.

Low-wage, low-skill individuals become more and more redundant, and their jobs may not be replaced, but wages are suppressed. And if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. – Obama

AI culture
I’ve had the sneaking suspicion that in the past 6 months, AI has gone from being in the background, to being front and center. As Ito articulates best, “this is the year that artificial intelligence becomes more than just a computer science problem.”

So it becomes important that the creation of AI have cultural and societal sensibilities. Yet, as Ito points out, “it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings.” How do we become more inclusive in adding values to AI, ethical AI? And what is the role of government?

Obama also mentioned that his concern wasn’t a runaway AI, but someone empowered by AI to do malicious things. Now the cybersecurity game just got more complicated. Interestingly, his view is not the usual ‘build a wall’ but the attitude toward viral pandemics, a public health model – build a system that can rapidly and nimbly respond to an outbreak.

I think there’s no doubt that developing international norms, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, is in its infancy. The challenge is the most sophisticated state actors don’t always embody the same values and norms that we do. – Obama

And where should AI research come from? Ito points out that a lot of AI research is coming from huge commercial research labs. Obama mentioned how these businesses want the bureaucrats to back off and let them chase AI. But he then pointed out the benefits of including the public and the government in big technological advances.

I think we’re in a golden period where people want to talk to each other. If we can make sure that the funding and the energy goes to support open sharing, there is a lot of upside. You can’t really get that good at it in a vacuum, and it’s still an international community for now. – Ito

AI is all about automating intelligence. The industrial revolution transformed work through the automation of factories. AI will displace jobs, but, as Ito points out, “it’s actually nonintuitive which jobs get displaced.” We have already seen paralegal roles being taken over by text scanning systems. What will happen to lawyers, doctors, or auditors? How will AI take or transform their roles?

Both Ito and Obama talk about how these changes in jobs might require a redesign of the social compact – how do we value contribution and compensation?

What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. – Obama

We can figure this out
At the end of the interview, Obama mentions space exploration, which leads to using Star Trek as a guide for humanity’s future. Obama, ever the optimist, points out that Star Trek was not about science fiction but about values and relationships, “a notion of a common humanity and a confidence in our ability to solve problems.” He sees the spirit of America being “Oh, we can figure this out.”

Taking Star Trek further, Ito mentions that the Star Trek Federation is “amazingly diverse, the crew is diverse, and the bad guys aren’t usually evil—they’re just misguided.” It is clear this is a world the two of them are always working towards.

A thought
I am not surprised that these two great thinkers, who have great hope in humanity, should gravitate to concepts such as cooperation, empathy, caution, and optimism. I, too, am an optimist, and have faith that the good in humanity will always prevail. Though, that faith requires I remember to take a long-term view, a view I am sure guides these two men, and understand that there will be temporary moments of despair when it seems we are not heading in the right direction.

Geez, I wonder why I feel that way?

Go read the full article and see the video and let me know what you thought.


Fitbit buying Pebble will NOT help it crack the code on smartwatches

For some reason, discussions of smartwatches make me twitch. Maybe it’s because I got my first smartwatch over 10 years ago.* Maybe it’s because I’ve watched “the next great category” of mobile devices come and go or fling themselves repeatedly on the rocks of disappointment. Maybe it’s because I’ve played with sensors, data, mobiles, and wearables for a long time and have not seen anyone “crack the code.”

OK, call me a cynic and a curmudgeon. Yes, there are many others in the industry who (should) know more than I do. Though, I don’t see anyone really “getting it.” And, admittedly, I don’t like doing the “I can tell what’s not right” thing; I’d rather do the “let me help you to the right place” thing of figuring out where the fusion of sensors, mobile, and wearable devices will head (though I do have many inklings).

A pebble in your shoe
The Fitbit CEO says they bought Pebble to help them crack the code on smartwatches. He says:

“We don’t think there’s been any product out there in smartwatches that combine general purposes, functionality, health and fitness, industrial design, and long battery life into one package.” [from: Fitbit CEO says buying Pebble could help it crack the code on smartwatches, The Verge]

Does he mean that not even Pebble has cracked the code? Because if Pebble hasn’t, then buying them won’t automagically impart the ability to crack the code to Fitbit.

In any case, the CEO of Fitbit is looking in the wrong place. I do not see anyone who has all the pieces in place to actually crack the code on smartwatches.

The future is here but unevenly and all that jazz
I mentioned I got a smartwatch over 10 years ago. It was a Suunto T6 (pictured), which connected to a heart rate band and a foot pod accelerometer (I didn’t buy the GPS pod because I already had a GPS pod for my phone). Suunto hasn’t stopped making what they call wrist-top computers. Nor have Garmin or Polar, Suunto competitors since that time. A good example of how these watches have evolved is, my favorite, the Garmin Fenix 3.

What lessons can Fitbit learn from Garmin, Suunto, and Polar, who have knocked it out of the park with smartwatches (ugh, “smart” is so stupid, can we just call them “watches” fercryingoutloud)?

True, Fitbit, and all the others, want to hit the large “consumer” market. Since Fitbit is obsessed with measurement, by “consumers” they mean all those people who don’t HAVE to measure themselves, as the chronically ill do, or who aren’t DRIVEN to measure themselves, as athletes or QSers are.

Fitbit and peers seem to be proposing WHAT folks should measure and HOW. But their lack of success in getting traction with those consumers suggests that these WHATs and HOWs do not match the WHATs and HOWs that would capture the consumer market.

By focusing on a driven segment, Garmin, Suunto, and Polar have been able to hone their offering to their customers and prove that watches with a lot of computing power and location awareness are something a segment of folks will pay for and keep using.

What’s the equivalent catch that will match what Fitbit and Pebble bring to the consumer segment with what the consumer segment really wants out of these devices?

Sales flop
We all know that these devices – from the fancy Fitbit pedometers to the expensive Apple watches – are not holding folks’ attention, especially when compared with the rabid attachment folks have for their phones. Everyone likes to track the sales of these devices that Fitbit and others are churning out. Why are we not talking about usage rather than sales?

Back in my day, Nokia wasn’t only bent on selling phones, but also on thrilling the user, so that the devices would drive up ARPU (average revenue per user, aka meterable usage) for carriers and a repeat purchase of a phone. Device success wasn’t tied just to sales, but to usage and repeat sales.

What’s the equivalent of ARPU for Fitbit, Pebble, Apple, and others? What’s the churn? What’s the repeat purchase for subsequent models?

Can someone find me those metrics?
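To make the question concrete, here is a sketch of how churn and repeat-purchase rates might be computed. The cohort numbers below are entirely hypothetical, made up for illustration; no vendor publishes these figures, which is exactly the complaint.

```python
# Hypothetical cohort numbers, for illustration only -- not real vendor data.
units_sold = 1_000_000           # devices sold in year one
active_after_6_months = 350_000  # devices still syncing data at month six
repeat_buyers = 90_000           # owners who bought the next model

# Churn: share of buyers who stopped using the device.
churn_rate = 1 - active_after_6_months / units_sold

# Repeat purchase: share of buyers who came back for a newer model.
repeat_purchase_rate = repeat_buyers / units_sold

print(f"6-month churn:        {churn_rate:.0%}")           # 65%
print(f"repeat-purchase rate: {repeat_purchase_rate:.0%}")  # 9%
```

Even this back-of-the-envelope view shows why usage-based metrics tell a very different story than unit sales alone.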

I’m not the first to grumble about this. There have been articles (here’s one from 2015) and analyst reports (a PDF report from 2014) on churn and usage for some time.

Why don’t the vendors report these metrics?

Wooden strategy
The Palm had its humble origins in a wooden block that its designer carried around to work out how he would use a mobile handheld computer. Who is doing the equivalent of Palm’s wooden block, but with watches?

I have no idea how Fitbit and others are actually designing their mobile devices. But from what I see, there are folks approaching wearables from the device and sensor perspective, pushing the product promise around steps, accuracy, sleeping measurement, heart rate sensors, and so forth. Another group seems to approach wearables from the data perspective, focused on showing data galore to users.

The answer lies somewhere in between: the success of wearables will be in the fusion of data, devices, and, most importantly, how the user experiences that data and those devices. Hence, nobody seems to have the right go-to-market approach. Most of the vendors focus on the data and the device, the apps and developers, not on the core human need that would get someone to buy the thing in the first place, that is, a need that is relevant to the general consumer.

Taking the measure of Fitbit
Fitbit (and, I feel, all the others) uses measurement as the main draw of all their watches and gizmos. Measurement is what folks who are DRIVEN want to do, or what folks with chronic conditions HAVE to do. But is that what the general public wants out of a device they carry with them everywhere?

Fitbit, based on the quote above, has missed the trick if they want to get into the general consumer world of watches. And I know what happens when device manufacturers can’t think beyond their device features.

If you want to make a digital device on someone’s wrist absolutely essential, it’s not going to be due to whiz-bang sensors or measurement, or fantastical dashboards or indicators of my steps or fitness.

A digital watch will be essential when it helps me work better, be better, communicate better, know better, feel better, get through my day and relationships better.

Ah, of course, we already have that digital device – it’s our phone.

My challenge to you
Put a frakkin’ wooden block on your wrist. Tell me why you look at it or want it to do something as you go about your day. How does it complement the things you carry, such as your keys, phone, and wallet – the things you check for before heading out the door, the things you would turn around for and go back home to get?

If anyone is studying this, let me know. If you think I’m full of krap, let me know.

Until then, I’ll be a curmudgeon, twitching every time someone thinks they can “crack the code” around smartwatches.


*Hey, if you go “WTF, the T6 isn’t a smartwatch”: of course it isn’t, by today’s expectations. That’s like saying the PowerBook 100 wasn’t a laptop.

What is 777labs?

For the past 20 years, I have been helping folks in marketing and sales identify, target, build, and nurture customer relationships, market opportunities, and brand growth. I have either led or heavily influenced sales strategies, marketing efforts, or solution design and development, giving me a unique perspective as to how strategy and execution cut across key areas of an organization and affect their customers.

My goal is to make this experience available through 777labs. I want to help my clients build an engagement strategy, whether the customer is another business or a consumer of a service or product. And I want to help build the content that enables the client to deliver on that strategy, be it sales content to provide the sales staff competency and credibility, or clever tweets and blog posts.

This is what I have been doing for decades, and this is what I enjoy doing.

A list of what I offer
Marketing: Digital marketing strategy, Content strategy, Social media strategy, Marketing strategy, Marketing content, Brand building, Marketing analytics, Community management

Sales: Customer engagement strategy, Sales strategy, Sales content, Sales training, Sales analytics

Solution design: Mobile service design strategy, Web service design strategy, Product and solution marketing, Solution design strategy, Data enrichment strategy

Healthcare, in particular
While I can do these things for companies in practically any industry, I’d like to focus on one industry I have extensive experience in: healthcare. I’m particularly interested in providing guidance to clients who are not traditional healthcare companies, but who are building a healthcare vertical or are interested in figuring out how to enter the healthcare market.

Contact me
If you are a company looking to take your product or service into healthcare, or you want to grow your digital health or patient engagement activities, 777labs can help. You can contact me, Charlie Schick, at

Pause for station identification

“I will find a way or make one” – on my Harvard University chair kindly given by Gary Silverman on my departure from his lab

Through the years, each of these pauses has been a definition of where I am in that sliver of time. Alas, I’m currently exploring a few potential paths, so defining where I am in this sliver of time is important to me.

So here we go.

Hello. My name is Charlie Schick. I’m passionate about the intersection of healthcare, mobile, and data; particularly how we can improve the way healthcare organizations engage with customers, patients, and families. I also advise companies on mobile, marketing, and analytics.

I have 20 years of experience in engaging with customers through various roles in marketing, sales, solution design and development, and research at major brands such as IBM, Nokia, and Boston Children’s Hospital. I have also been influential in leading these major brands toward innovative ways of engaging with customers, particularly through digital solutions.

What I’m doing now. Again.
My first gig out of the lab was my own company, Edubba, providing editorial consulting – running proto-blog sites, being a columnist for some magazines, providing wordsmithing for product reviews and marketing material.

That independent effort quieted down when I moved to Nokia, though I did keep working on the side – writing feature articles for organizations, a biz plan here or there. The bulk of my writing and strategy work in the past 20 years, though, has really been corporate: the Beagle; Hello Direct; the Nokia Cloud project; the Nokia corporate blog; Children’s Facebook page and blog; sales consulting and occasional writing for IBM; trying to make a difference at Atigeo.

Now that I am on my own, again, I’m going back to my first job out of the lab. I’m launching a new consultancy, 777labs. This time I will have a broader scope than before, tapping into my many years of experience in the corporate world, and relevant to where I want to make an impact.

777labs is a customer engagement strategy consultancy helping clients identify, target, build, and nurture customer relationships, market opportunities, and brand growth. Our services cut across sales, marketing, and solution design strategy and also include the necessary tools, analytics, and content development. Our primary focus is in healthcare, including providing value to non-healthcare companies who are entering the healthcare market.

I’m excited to get back into leading this work full-time, for myself.

Thinking and speaking and helping
Beyond the new consultancy, I want to continue giving talks and running panels. I regularly speak in front of large audiences, sharing my experience and interests through various forms of media and design, and in the office of CxOs. Send me a note if you want to know more.

And of course, my standard disclaimer
(riffing off of an ancient Cringely disclaimer)
Everything I write here on this site is an expression of my own opinions, NOT of any of my clients. If these were the opinions of my clients, the site would be called ‘777labs’ client’s something or other’ and, for sure, the writing and design would be much more professional. Likewise, I am an intensely trained professional writer :-P, so don’t expect to find any confidential secret corporate mumbo-jumbo being revealed here. Everything I write here is public info or readily found via any decent search engine or easily deduced by someone who has an understanding of the industry.

If you have ideas or projects that you think I might be interested in please contact me, Charlie Schick, at; via my profile on LinkedIn; or via @molecularist on Twitter. And if you’re interested in working with 777labs, you can contact me at