For me there is something really powerful in drawing freely with a pencil and not being constrained by the way tools (Figma, etc) make you think.
Anyhow, the side project is here if anyone wants a quick way to do wireframes and add in logic, components, etc.: https://roughups.com
Specific lesson on using data here https://roughups.com/learn/boxes
If one could write generic code, and if the properties of the inked objects were exposed to the runtime (e.g. corner points, stroke/fill colors, rotation, etc.), your app would be really close in functionality to the linked project.
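For illustration, here's a minimal Python sketch of what "exposing inked objects to the runtime" might look like. All names here (InkShape, bounding_box, the fields) are hypothetical, not taken from either project:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical runtime-facing record for a drawn shape: the properties a
# user's generic code could query after sketching.
@dataclass
class InkShape:
    corner_points: List[Tuple[float, float]]
    stroke_color: str = "#000000"
    fill_color: Optional[str] = None
    rotation_deg: float = 0.0

    def bounding_box(self) -> Tuple[float, float, float, float]:
        # Derived geometry computed from the inked corner points.
        xs = [x for x, _ in self.corner_points]
        ys = [y for _, y in self.corner_points]
        return (min(xs), min(ys), max(xs), max(ys))

# User code operating on drawn geometry:
shape = InkShape(corner_points=[(0, 0), (10, 0), (10, 5)], rotation_deg=15.0)
print(shape.bounding_box())  # (0, 0, 10, 5)
```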
My ultimate aim is to get it so that you can actually publish a sketched app to the App Store… whether apple would accept a scribbled app or not I don’t know!
Since you have a whole section on learning:
One thing I've noticed is that, at least on physical whiteboards, most people I've worked with weren't comfortable with their handwriting and drawing skills. Even when everyone agrees that "rougher is better."
For them I used to run a quick hands-on workshop back at Pivotal Labs. It was called "Whiteboard Hacking" (please don't judge ;) and I also published a small free manual for it.
The manual with a bunch of exercises is still here for anyone who feels overly critical of their handwriting and drawing skills: https://publish.obsidian.md/alexisrondeau/Attachments/Whiteb... .
> Building larger, more technical software systems in Inkbase becomes extremely difficult for many reasons, from the poor ergonomics of typing with an on-screen keyboard
Nobody has really solved the ergonomics problem of being able to type on a keyboard and also sketch, and have the entire system be portable and friction free.
Though I cannot imagine it working within the confines of the keyboard HID API.
With non-defective keyboards (and non-spazzy hands), key presses are deterministic; that's not the case with handwriting, where much of the information about how to interpret the text comes from the surrounding characters, including those that follow.
You'd need to deal with the shifting probabilities of text input without introducing user-noticeable latency or triggering an excess of events.
It doesn't sound impossible, but it wouldn't be easy either.
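To make the "shifting probabilities" concrete, here's a toy Python sketch (the scores and the bigram table are entirely invented) in which a later stroke revises the best reading of an earlier one, which is exactly the kind of after-the-fact revision a keyboard never produces:

```python
from itertools import product

# Invented language-model bonus: "cl" is a plausible character pair.
BIGRAM = {("c", "l"): 0.4}

def best_hypothesis(strokes):
    """Pick the best character sequence given per-stroke candidates.

    Each stroke is a list of (char, recognizer_score) candidates.
    Exhaustive search is fine at toy scale; a real system would use
    a beam search or lattice decoding.
    """
    def score(seq):
        total = sum(s for _, s in seq)
        # Context bonus lets a later stroke influence an earlier choice.
        for (a, _), (b, _) in zip(seq, seq[1:]):
            total += BIGRAM.get((a, b), 0.0)
        return total
    best = max(product(*strokes), key=score)
    return "".join(c for c, _ in best)

# Stroke 1 alone reads best as "d"; adding stroke 2 revises it to "c",
# because the bigram context outweighs the per-stroke scores.
strokes = [[("c", 0.4), ("d", 0.5)], [("l", 0.5), ("t", 0.4)]]
print(best_hypothesis(strokes[:1]))  # d
print(best_hypothesis(strokes))      # cl
```

An input system built on this has to decide when to surface the revision to the user, which is where the latency/event-storm tradeoff mentioned above comes in.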
My experience with every handwriting-recognition facility since has been that it got worse and more opinionated, and is probably tied too tightly to grammar and spell checking. It's rather like what I call "destructive texting": predictive typing on mobile devices constantly auto-replaces words after I've typed them, without me noticing until the message is sent!
I still have it, although it hasn't been powered up in a couple of years!
Whiteboard use in real life is a combination of speech and sketching.
Writing on Obsidian and sketching on excalidraw (both with mouse and stylus) and inserting the resulting image via MD is a nice workflow for me. I like it.
The ones that fit as part of a case seem ideal.
Might not be as portable, but certainly more comfortable.
1. If I put the pad to the right of the keyboard, I can type normally, but I am writing at an awkward angle. If I put the pad right in front of me, the keyboard is too far away and I can't type fluidly. There is a continuous tension about where to place them.
2. Pads connect via Bluetooth or a wire. If I use Bluetooth, then my headphones must be wired, so there is a tradeoff there. Headphones are necessary for meetings.
3. There are the non-screen sketchpads, and the screen ones. The non-screen ones require a lot of hand-eye coordination (because you have to look at your monitor while drawing on the pad). They are not as stress-free as paper. The screen ones would help a lot more, but now you have two screens showing the same thing. The pad screen to draw things and move them around with your hand/pen, and the monitor to show you what you type. Kinda stupid.
Well, why not just use an iPad-like tablet and both draw on it and type with a keyboard? Because drawing requires the surface to be flat, while a keyboard requires the screen to be verticalish so you can easily see what you are typing.
Any experience with https://drawpile.net/?
But this needs some notion of pages or slides, so we can progress through the calculations and later export to PDF.
Recently entered the iOS/iPadOS world and still having a lot of trouble adapting, but are you suggesting that the iPad can only connect to one Bluetooth accessory at a time? Or that the Bluetooth connection becomes unstable when too many devices are connected?
It gets worse since touch screens are actually pretty imprecise so I end up using a 3D mouse for viewport manipulation and a regular mouse for precise navigation (nested menus!) on top of the pen and keyboard. At least one of the input devices is considered lost at any given time.
Visual scripting could be a good start. Unreal's Blueprints are really good, with a nice UI and plenty of usability. Perhaps if you could draw your own nodes with the properties you want, that could be the missing link for a truly hand-drawn visual programming language.
On the other hand, maybe a specialized programming language optimized for that could work. The blueprints idea seems like a good concept to start with.
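As a rough illustration of the idea (this is not Unreal's actual API, just a sketch), a Blueprints-style graph boils down to user-defined nodes wired together and evaluated by pulling values through their connections:

```python
# Minimal sketch of a node-graph visual programming model, assuming each
# hand-drawn node reduces to a name, an evaluation function, and input
# wires to other nodes. All names here are illustrative.

class Node:
    def __init__(self, name, fn, *inputs):
        self.name = name
        self.fn = fn          # what this node computes
        self.inputs = inputs  # wires to upstream nodes

    def evaluate(self):
        # Pull-based evaluation: recursively resolve wired inputs first.
        return self.fn(*(n.evaluate() for n in self.inputs))

# A user could "draw" three nodes and wire the first two into the third:
a = Node("A", lambda: 2)
b = Node("B", lambda: 3)
total = Node("Sum", lambda x, y: x + y, a, b)
print(total.evaluate())  # 5
```

The hand-drawn part would then be a recognition layer that turns sketched boxes and connecting strokes into `Node` instances and wires.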
A full programming language would be interesting but pretty alien. As someone who for whatever reason tends to end up with lots of super/sub/subsubscripts (sometimes with multiple dimensions in each!), variables with, like, more than 6 letters are basically a nightmare to write by hand. I can't imagine writing code by hand, at least with typical variable names. Although maybe a programming language that looked more like prose would be possible.
I heard mathpix is basically that, math via ocr to solution (haven't used it personally).
This is helped by the fact that Inkbase has a domain-specific language, which does not have to be generally expressive, and in which one typically writes only short snippets.
Despite how much computers have revolutionized every aspect of our lives, they're awfully primitive at being educational. Think about it: our best online knowledge resource is the equivalent of a classic encyclopedia, with static text and some multimedia features. There are plenty of free lectures available, but they're limited to video recordings and text. Content is rarely interactive, or presented in a way that guides learning via experimentation. There are some worthwhile resources that do this right, but they're either not very accessible or not mainstream.
Imagine where we could be as a civilization if operating systems were built with education in mind. I'm not smart enough to design such systems, but having seen what some of the brightest minds in our industry have come up with decades ago, I can't help but feel underwhelmed, to say the least, by what we have today.
Your post brings 2 thoughts to my mind:
1) I disagree with your characterization of "our best online knowledge resource". I assume you're talking about Wikipedia; sure, that's one way to characterize it (albeit not the most genuine, IMO). But Wikipedia is a tiny part of the picture; we also have YouTube (with millions of creators from all cultures/languages/fields), Khan Academy, the Internet Archive, your local library, and countless other resources. That's the beauty of the internet: it's decentralized. No centralized service can fulfill every single knowledge need, because knowledge gets formed when an individual has to integrate multiple sources of information and reconcile them with their experience of the world (sorry, fans of the Young Lady's Illustrated Primer; it was always a literary illusion). Otherwise it's not much more than regurgitating propaganda.
2) Ultimately, software doesn't really matter for education. What matters is that kids have a physically safe (heated/cooled, with access to food, clean restrooms, etc) facility, and are under the guidance of teachers who are in an environment conducive to quality teaching (eg classes of reasonable sizes, administrators that don't micromanage them, etc). I've seen this "we just need the right interactive software to fix education!" fallacy so many times amongst tech people. Sure, in the hands of competent teachers who have the above, tech & software can enhance learning. But it's not the fundamental piece. And many, many schools (in the US and in many "developed" countries) lack those fundamentals. If you want to "imagine where we could be as a civilization", that's where you have to improve things.
Sorry, I know "it's not about the tech, it's about the people and basic physical environment" is a terribly boring answer for a hacker. But spend some time teaching, or building fancy software that ultimately doesn't do much for student outcomes, and I suspect you will observe the same.
> Wikipedia is a tiny part of the picture; we also have YouTube (with millions of creators from all cultures/languages/fields), Khan Academy, the Internet Archive, your local library, and countless other resources
That's precisely my point. YouTube is audiovisual, something we've had since the dawn of television. Sure, the content has exploded in diversity (which brings its own curation problems), but it's still a fundamentally consumption-based, one-way, zero-interaction learning experience.
Khan Academy and other MOOCs are also audiovisual and text-based and, at best, have communication capabilities with the instructor and other students. This is an attempt to bring the traditional school experience to the virtual world, and is hardly revolutionary.
The Internet Archive, while vast in content, is still only audiovisual, and has the same consumption and curation problems.
If we settle for computers being a mere portal to traditional resources, then, yes, there is vast knowledge to be acquired. And this has been invaluable for many people, myself included.
But computers are capable of so much more if we step outside the boundaries of traditional teaching. They allow content to be interactive, linked, remixed and experimented with in ways that weren't possible before. For the developing mind of a child, this allows entirely new ways of making knowledge accessible and entertaining. Sure, there have been many attempts at this, and the edtech sector is huge, but none of this is fundamental to how computers are used. At the end of the day, most children will be more drawn to endless media consumption than an educational app, and our OSs are optimized for the former rather than the latter.
It's difficult to imagine what such a system could look like today, or the repercussions it could have on education, but I find Alan Kay's ideas very visionary in this sense.
> Ultimately, software doesn't really matter for education. What matters is that kids have a physically safe (heated/cooled, with access to food, clean restrooms, etc) facility, and are under the guidance of teachers who are in an environment conducive to quality teaching
Sure, this is the traditional school environment. While that certainly helps, and it's a crime that teachers are so underappreciated and underpaid, that system is broken in many ways: uninspired teachers who fail to raise interest in learning, teaching practices that encourage memorization of concepts rather than curiosity and experimentation, the idea that everyone learns the same way or at the same pace, corruption, bullying; the list goes on and on.
We have the technology to revolutionize every aspect of our lives, yet when it comes to education, we're still relying on traditional methods. So I disagree that software doesn't really matter. All the software _attempts_ we've had so far haven't made a breakthrough, but the potential is there.
As examples of the good kind of software, take a look at the articles by Bartosz Ciechanowski. Or in a more commercial sense, brilliant.org. These are isolated examples of what computers can do, but imagine if the same operating system you're reading this on fundamentally worked in a different way, to allow free-form experimentation with concepts in ways we've never seen before. Then imagine that system connected to the modern internet, where billions of people are doing the same thing, and what that interaction could mean for developing new ideas. This goes beyond simple screen and document sharing that we find sophisticated today. And, in many ways, it's unimaginable precisely because it's so far from what we can do today.
I get your point about YouTube videos or Wikipedia pages being "more of the same", in a way. There's a kernel of truth to it, but I don't think that's entirely fair - just the fact that Wikipedia has hyperlinks makes it already immensely more valuable as a tool - and of a fundamentally different medium - than a traditional encyclopedia. Same for animated/dynamic/interactive graphics that are trivial to embed in a webpage but impossible to represent on paper or on a TV screen.
But point taken, it feels like computers somehow ~should~ enable something more radical. When pushing for the Macintosh in education in the 80s, Steve Jobs used to talk about how computers might one day enable students to ask questions directly to Socrates about what he wrote and have them answered, rather than not being able to engage with a text beyond reading it. I think that's what you're getting at.
Now what is really interesting about this is that it gets to the core of what we (think we) want from teaching - having a perfectly patient interlocutor to whom we can ask questions, who can clarify misunderstandings, guide our attention, etc. Maybe this is something that we'll be able to build given recent breakthroughs around language nets. I don't think we're anywhere near but it's an interesting lead for sure. A "Socrates chat bot" that could meaningfully answer questions and clarify confusion about what he meant would be very impressive.
Bartosz' work is utterly fantastic, and is part of a broader movement termed by some as "explorables" (https://explorabl.es). Explorables also (unsurprisingly) have their roots in the early days of computer science, where a handful of computer scientists and educators saw the formidable synergy between constructivist approaches to pedagogy, and software ("Mindstorms" by Papert is a seminal text here). There's plenty of cool work in that field from the last half century (which I contributed to in my own minute way when I was in grad school). That general idea - give students computer models to manipulate so they can intuitively develop a mental model for things! - has lots of work behind it.
But does that work effectively scale out, and translate to better student outcomes at a societal level? I don't want to say it's a big resounding no, but... it's not encouraging. We're certainly way past the optimism of the early 2000s OLPC when we thought that all we had to do was give students a laptop loaded with educational software to "fix" education.
To follow one of your points, the explorables website linked above has hundreds of them listed - if they could just be handed off to students and suddenly dramatically improve outcomes, teachers would certainly be doing that.
So we're back to our original question: if this is all stuff we've been doing and exploring for half a century, why isn't it more widespread? Why hasn't it meaningfully improved our issues? Is it because we haven't done enough of it, because we're missing some key insights? Or is it because it doesn't solve the problem in as fundamental a way as we would hope?
And that's where I tend to fall more into the latter camp - education is fundamentally a social process, learning environment matters a lot, students are not always going to be receptive and the role of the teacher is also knowing how to handle that. An adult can tell you "stop screwing around" in a way that computers (or an ideal Socrates chatbot) can't - that's also "education".
A 3rd grade teacher's biggest challenges lie more with keeping their (often oversized) class focused, teaching all the points they need to get to, and trying to have a meaningful impact - any impact - on the students who come to school hungry, improperly clothed, or fundamentally opposed to learning anything because of a crappy family situation, than with a need for more interactive materials.
Here's a nice short video I recently saw of a great math teacher in action; I encourage you to watch it.
The value of the teaching here is not so much the content, which could easily be summarized in a sentence and few pictures. The pedagogical value here all comes from the teacher, and how she manages the class, gives everyone a voice, reinterprets their answer in the context of the original question and what she wants to demonstrate, etc.
My question to the reader: do you think this little math exercise for middle schoolers would be as effective as a webpage, no matter how interactive? Or does its value come from the fact that it is a social, embodied, cooperative process?
One thing that has been repeatedly demonstrated to raise student achievement: giving out free lunches. https://www.maxwell.syr.edu/docs/default-source/research/cpr...
Here's a fun one I've heard from French teachers: male Muslim kids who openly defy female teachers because they were taught non-Muslim women are not supposed to be sources of authority.
In fact, most of the things shown in those demos can be done today - perhaps in more narrow ways, but fundamentally we have collaborative document editing, video chat, complex live drawing tools, multi stream video editing, handwriting recognition, etc etc etc. All operating at scales that could only be dreamt of in these early days.
What it brings to my mind though is that software doesn't exist in a vacuum. Writing software requires many hours of human effort, and maintaining it even more so. Sustained, focused, organized human effort requires funding of some sort. Software exists to solve problems, and in our globalized capitalistic economies, that means the value of software does not lie in reaching some paragon of pure academic composability/extensibility, but in solving concrete problems for people while meeting some arbitrary costs/tradeoffs.
This is why those "tools for thought" demos seem to always rehash the same ideas and get stuck circling the same drain that Engelbart & Kay & friends charted 50 years ago; in the meantime, some industrial company you've never heard of is paying a few consultants big bucks to come up with "boring" Excel spreadsheets that are just as much "tools for thought" as anything else that humans use.
Now, am I satisfied with this state of affairs, and would I love to see what models for writing and maintaining software could exist in a non capitalistic culture? Absolutely not, and absolutely.
But that seems to me to be more the root cause of why we’re still chasing the Engelbart mirage over half a century later, rather than some fundamental/conceptual “progress” to be made.
And speaking of funding, it is interesting to look at what funding environments those open-ended "tools for thought" projects tend to come from: more often than not academia or, in the case of Ink & Switch, an independently wealthy PI. Places directly connected to money-making ventures, like Xerox PARC or MSR, are short-lived and few and far between.
That's not a coincidence, as modern OSs were inspired by those demos. But isn't it a sign of a lack of progress that the pinnacle of modern technology is being able to do the same things shown 50 years ago _slightly better_?
Why haven't there been equally revolutionary ideas in HCI since then? We have better screens on smaller computers, and have perfected tapping on glass and haptic feedback, but what we can do with all this technology is awfully limited.
Add to that the invasion of advertising into every facet of computing, perverting companies' incentives away from developing technology that benefits humanity and toward exploiting it, and in many ways we've regressed.
XR seems to be the next step forward (itself not a novel idea either), but so far it seems that it will be ruled by the current tech giants, which is far from enticing.
This was implied, but thank you for capturing the spirit succinctly.
I've got an old Mac IIci that I picked up at a thrift store for $12, which came with an old version of Photoshop and an ethernet card, that I keep around to remind myself of how little we've progressed. For a machine manufactured in 1989 with performance measured in MHz, it may be noticeably slower, but that's not the point. It's that we're doing the same tired twentieth-century stuff we were doing 33 years ago, just slightly faster.
XR does seem a path forward, but the so-called giants you mention are selling subsidized prototypes you can't even take outside without a stern warning that you may brick the device. It's been close to a decade since the DK1 came out. You'd think they'd be past the point where legs are a new feature.
Through the early 1990's, hardly anyone was doing electronic file transfer regularly in their personal workflows. While there were many examples of phoning in remotely to be updated or do certain kinds of work, it was a per-industry thing. The larger changes finally came to pass only as email and office networking gained widespread adoption. So...you didn't need computers everywhere in everyday life. They were a nice addition if you were writing frequently or you wanted a spreadsheet, but the net outcome of that was that you could run a smaller office with less secretarial staff - and not a lot more.
In the 90's and 00's, the scope expanded to cover more graphics and messaging workflows. But it was still largely 1:1 replacement of existing workflows in industry, with an import/export step that went to paper. And when you have the "go to paper" bottleneck, you lose a lot of efficiencies. Paper remained a favored technology.
It really wasn't until we had smartphones and cloud infrastructure that we could rely on "everything goes through the computer" and thus start to realize Englebart's ideas with more clarity. And that's also where the "social media" era really got going. So it's like we've barely started, in fact.
What all the prior eras of computing amounted to was a kind of "it'll be cool when" statement. The future was being sold in glimpses, but predominantly, the role of the computer was the one it had always had: to enhance bureaucratic functions. And the past decade has done a lot to challenge the paradigm of further enhancement towards bureaucratic legibility. In the way that urbanists joke about "just one more lane, bro" as the way to fix traffic, we can say "just one more spreadsheet, bro" has been the way we've attempted to satisfy more and more societal needs.
But there is a post-Engelbart context appearing now: instead of coding up discrete data models, we've started strapping machine learning to everything. It works marvelously and the cost of training is a fraction of the cost of custom development. And that changes the framing of what UI has to be, and thus how computers engage with learners, from a knobs-and-buttons paradigm to "whatever signals or symbols you can get a dataset for."
Only Apple, Google, and Microsoft seem to care about pushing consumer OS experiences forward, and unfortunately they always take two steps forward and one step back every couple of years.
On a computer the size and weight of a paper notebook, with all day battery life, a display whose color quality/resolution/refresh rate were utterly unimaginable 33 years ago, I can have dozens of layers that are up to tens of thousands of pixels in edge size, use advanced AI to generate textures like grass/clouds/etc. or segment arbitrary objects from the background, recomposite all that in real time, etc. etc. etc...
If you haven't used a computer in the last 33 years I highly recommend doing so.
If your argument is that we're still making 2D pictures, well we've been making that for a few thousands (if not tens of thousands) of years. If you want to be making weird experimental 3D/4D/nD VR/AR/xR art stuff there's lots of great tooling for that too (but it won't run on your 1989 Mac...)
Those were completely imaginable 30 years ago to anybody with a physics background. It was just a matter of time before transistors shrank down to the nm range (of course with the enormous amounts of engineering work that made it possible, but there was no physical reason it couldn't be done).
Not quite the same, but there was even a commercial vector illustration app that tried to add a bunch of "smart" programmable features back in the early 90s. It failed: https://www.google.com/search?q=intellidraw+adlus
Of course that doesn't mean someone won't get it right and compelling eventually. Me, I'd guess it would take some serious ML and maybe voice/gesture recognition for it to really work for more than a few geeks.
Thanks for sharing!
To be honest, this is something I'd be willing to pay a lot of money for...
Also, I can imagine something like this would be very valuable when applied to language learning, especially for languages with ideograms
And since I noticed the authors are possibly reading...
This is simultaneously the most amazing thing I've seen all month... and I watched people send a rocket around the moon ... amazing and yet deeply deeply frustrating to read.
This is the same deeply frustrating, irritated feeling I have when I'm searching for answers to a problem and read an academic paper that talks about some algorithmic innovation or software improvement that does one thing or another that might help me solve my problem, and the entire paper is about the process and the results, with a tiny summary of the changes, and not a damn line of source code... and it really grinds my gears. It feels like I've been jerked around, my time wasted...
But through all that negative emotional stew, these two apps, Inkbase and Crosscut, look positively magical. I wasn't kidding: it's the most amazing thing I've seen in the last month, and possibly all year... and yet the authors appear to have no intention of turning either of them into actual products... all this will (unlike Tydlig) be impossible to use to show other people, face to face, how computers and computing can be different from spreadsheets, math, and computer code... Having read and now re-read both pages, I saw no clear reference to the future of either project, beyond highlighting the interesting aspects they have taken away from each of these for future work on other projects...
At the end of the day I know that magical things like this often arise because the developers have complete freedom: the ability to design everything without any outside pressure, no product growth to worry about, no user feedback to answer to, no help document to write... but at the same time, every time I come across something like this, which appears so complete, so far progressed towards being a product, but is just put on the shelf... it fills me with weltschmerz.
So now I have a brand new copy of https://museapp.com/ ... some weltschmerz... and the rest of my days work ahead. :-/
It's hard to predict where this work might eventually lead (that's the nature of research) but I will just say that we continue to explore the space and we have another piece we'll be sharing soon.
It’s not a hardware problem — pretty much these same demos could have been done (we did some of them!) in 1990 with much more primitive devices. It’s more that we haven’t found the underlying model to build on to get beyond simple demos.
I wrote a much simpler demo back in 2018 using the Apple Pencil + OCR to compile Swift code: https://twitter.com/NathanFlurry/status/980501243377344512
It's essentially just a glorified Swift AST and in no way competes with the efficiency of a traditional keyboard, but 48h of hacking scratched enough of that itch I had to build a VPL.
However I still think they'll have to pry my pencil and paper out of my cold, dead hands (:
Maybe I'm biased, since I've been drawing a fair amount from a young age, but as I watch these videos, as with every other technology like it I've seen, I can't help but think: "Oh, that would start to annoy me pretty quickly."
Maybe someone who grew up with a technology like this, and had a Vim-like relationship with the system (so they knew with confidence how it will dynamically act) would be able to do some really impressive stuff, though.
The RSS feed is a good long-term bet, and in the past we have posted on Twitter at @inkandswitch.
As someone who loves working on paper to design the structure of a system, the conversion to boilerplate has always been a laborious process. Any way to move from a drawn UML-type diagram to a basic class structure would be an improvement.
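As a sketch of that idea, assuming the drawn diagram has already been recognized into named boxes with fields and inheritance arrows (the diagram format below is invented for illustration), the boilerplate step itself is mechanical:

```python
# Hypothetical input: what shape recognition might extract from a drawn
# UML-style diagram. Boxes become classes; arrows become "parent".
diagram = {
    "Animal": {"fields": ["name"], "parent": None},
    "Dog":    {"fields": ["breed"], "parent": "Animal"},
}

def emit_classes(diagram):
    """Turn the recognized diagram into basic class boilerplate."""
    lines = []
    for cls, spec in diagram.items():
        base = spec["parent"] or "object"
        lines.append(f"class {cls}({base}):")
        if not spec["fields"]:
            lines.append("    pass")
        for field_name in spec["fields"]:
            # Types aren't usually drawn, so default everything to str.
            lines.append(f"    {field_name}: str = ''")
        lines.append("")
    return "\n".join(lines)

print(emit_classes(diagram))
```

The hard part, of course, is the recognition step that produces `diagram` from ink in the first place, not the code generation.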
But no doubt I would try it to see how it goes. Great work!!!
It's at the top of the article. It's even bolded to catch your eye.