One thing that I love about it is the grip I can use - it's the "precision" grip that the author discusses in the article, and it gives me a lot of fine control over what I'm doing.
The other day, I saw the haptic smart knob on Hackaday. It's a force-feedback rotating knob with software-defined detents and boundaries.
If someone could combine the SpaceMouse with the smart knob, I think the resulting multiple DOF force feedback controller might just be the input device of the future.
In the demo they had set up, there was a 3D model of a car, and you slide the tip of the pen along the surface. The haptic feedback would stop the pen from penetrating the model, so it really felt like the tip was touching the volume of a solid.
Super cool experience.
Note that Thomas Massie, US representative and climate change denialist, was the founder of Senseable.
The conclusion reminded me of an interesting point they mentioned during the demo: the haptic feedback ran at 1,000 FPS! We have much finer-grained expectations around time for touch than we do sight.
It got first-party support in a bunch of Valve games, and a decent number of games had unofficial support through mods, but it never really took off. I think the company now makes medical tools with the same sort of tech.
The lack of feedback in input devices beyond a little rumble is really disappointing. I think feedback is extremely important, especially now that VR is starting to actually take off.
It was limited in that it gave force-feedback only against the sphere at the tip. The orientation of the pen could not be restricted. We attempted to simulate feedback on more DOF through leverage, but it was lacking: like using a mouse to simulate a steering wheel in a driving game.
I'm a UI/UX designer who was historically very invested in the Surface ecosystem during the early Win8/10 era, so this was a day-1 curiosity purchase for me.
As a physical object, I like it a lot. It's not _perfect_, but it has a great weight, the Dial has a satisfying resistance (like a 70s stereo knob), uses AAAs, and the haptic feedback is solid for a digitally-triggered vibrating motor (vs a literal ratchet).
What I find, however, is that it's a superior consumption tool vs creation tool. It's at its best when being used as a single-function knob. Great for sitting at a desk while reading a whitepaper, or perhaps controlling volume while listening to music. Day to day I use a mouse with a stepped-wheel, but the smooth scroll on the Dial makes it my preferred way to handle longform content. Keeps a long article flowing.
However, it has not become an indispensable part of my workflow. That may just be me. I need to pivot between Windows/macOS/Linux over the course of a day, so a lot of its proprietary-tech promise is wasted and I've built fewer habits. I'm also not sure offhand how good the integration is with design tools after the initial fanfare around Adobe, Surface Studio, etc.
Other tricky part: It's bluetooth. Once it's awake and in-use, that's...acceptable, if you aren't actively looking for tiny bits of input lag. But if you need it on-demand (sudden noise from the speakers, etc.) and you haven't used it in a while, it may be a couple seconds before it's responsive to that first input.
Still, it _is_ intriguing enough to me that years on, it's still on my desk and is one of the only bluetooth devices I own, much less tolerate using. I think it's at its best when it's a context-sensitive linear actuator with some light multifunction capability vs a new paradigm.
But I have to admit "press and hold, then rotate..." makes me think of child-proof bottles. So with a touch of arthritis, it might be a love-hate affair.
Here's what they looked like:
There are plugins for using generic MIDI controllers (for musical instruments) that have knobs, sliders and buttons. Then there are also dedicated controllers that do basically the same thing, but with specialised labels on the controls.
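The mapping those plugins do is straightforward to sketch. The snippet below routes raw 3-byte MIDI Control Change messages to named software parameters; the CC-to-parameter table is hypothetical (apart from CC 7 and CC 10, which are conventionally volume and pan), not taken from any particular plugin:

```python
# Minimal sketch: routing raw MIDI Control Change messages to named
# software parameters. The mapping table is illustrative, though CC 7
# (volume) and CC 10 (pan) follow the usual MIDI conventions.

CC_MAP = {
    1: "filter_cutoff",   # mod wheel -> filter cutoff (hypothetical binding)
    7: "channel_volume",  # CC 7 is conventionally channel volume
    10: "pan",            # CC 10 is conventionally pan
}

def handle_midi_message(msg, state):
    """Apply a 3-byte MIDI message to a parameter dict.

    A Control Change message is (0xB0 | channel, cc_number, value),
    with value in 0..127; we normalize it to 0.0..1.0.
    """
    status, cc, value = msg
    if status & 0xF0 != 0xB0:      # ignore anything but Control Change
        return state
    name = CC_MAP.get(cc)
    if name is not None:
        state[name] = value / 127.0
    return state

state = {}
handle_midi_message((0xB0, 7, 127), state)   # turn a knob: volume to max
handle_midi_message((0xB0, 10, 64), state)   # slide a fader: pan near center
```

The dedicated controllers work the same way under the hood; the specialised labels just fix which entry of the mapping each physical control drives.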
The device has been used as a video game controller by someone playing Elite Dangerous. It was also used as a robot controller by NASA.
My one caution is that it’s quite hard to isolate control of individual axes. This is fine in CAD - if you accidentally move a tad in the Z axis while you’re trying to rotate, it doesn’t really affect anything. But depending on what you want to bind the mouse to, it might make it really unusable.
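One common software mitigation is a "dominant axis" filter that keeps only the strongest of the six inputs (3Dconnexion's driver exposes a setting along these lines; the sketch below is my own illustration, not their implementation):

```python
def dominant_axis(axes):
    """Keep only the axis with the largest absolute deflection.

    `axes` is a 6-tuple (tx, ty, tz, rx, ry, rz) of raw puck values;
    everything but the strongest input is zeroed, so a slight
    accidental push on Z while rotating is ignored.
    """
    strongest = max(range(len(axes)), key=lambda i: abs(axes[i]))
    return tuple(v if i == strongest else 0 for i, v in enumerate(axes))

# A firm twist on ry with accidental drift on tz and rz:
dominant_axis((0.0, 0.0, 0.1, 0.0, 0.8, 0.05))  # -> (0, 0, 0, 0, 0.8, 0)
```

The trade-off is that you lose genuinely combined motions (orbit while zooming), which is why it's usually an opt-in setting rather than the default.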
I also like to turn on "Do not automatically tilt while zooming". I don't recall if that affects SpaceMouse, but I don't like the default automatic tilt when using regular mouse or keyboard navigation.
Also if you are using a ThinkPad, be sure to try each of the three mouse buttons (hold one down and move the TrackPoint around) when not using the SpaceMouse.
A device like that could be adapted for dev.
I don't do cad work, but I'm tempted to get a SpaceMouse for the sheer coolness factor.
Part of me wonders if it's just that people don't know that this is solved on the web, so make sure to go here, try it out, and make something if Bret's article appealed to you: https://google.github.io/mediapipe/solutions/hands#javascrip... + https://codepen.io/mediapipe/pen/RwGWYJw
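For a taste of what you can build on top of that output: MediaPipe Hands reports 21 (x, y, z) landmarks per hand, with the thumb tip at index 4 and the index fingertip at index 8. A minimal pinch detector is just a distance check; the threshold and the landmark data below are illustrative, not values from the library:

```python
import math

THUMB_TIP, INDEX_TIP = 4, 8  # landmark indices in MediaPipe's hand model

def is_pinching(landmarks, threshold=0.05):
    """True when the thumb tip and index fingertip are close together.

    `landmarks` is a list of 21 (x, y, z) tuples in normalized image
    coordinates, as MediaPipe Hands produces; `threshold` is a tunable
    guess, not a library constant.
    """
    return math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP]) < threshold

# Fake landmark list: two fingertips almost touching.
pts = [(0.0, 0.0, 0.0)] * 21
pts[THUMB_TIP] = (0.50, 0.50, 0.0)
pts[INDEX_TIP] = (0.52, 0.51, 0.0)
```

In a real page you'd feed this the per-frame results callback from the JS solution, but the gesture logic itself is this simple.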
Tech companies need to prove they can limit themselves to using data for the purpose for which it was requested, then we can talk about whether or not I'll give camera access.
It didn't really work out though. Long story short: my arms got tired. Turns out that it's a kinda fundamental problem with how they had designed the interface - hovering your hands above something for extended periods of time is simply tiring and uncomfortable.
I guess they did not think about that.
We've got an open source library for mobile hand-object input [https://portalble.cs.brown.edu/], and the version with Leap Motion is really nice, but doesn't directly work with a phone (we had to pipe data through a compute stick to make it work).
I'd love to see MediaPipe Hands match LeapMotion precision some day, but I'm not even sure if it's possible. A real depth sensor goes a long way.
Anybody remember the 2008 Blackberry Storm, the company's iPhone competitor? It had a "clickable" glass panel, to bridge the UX gap between the tactile KB BlackBerry phones and the touchscreen-only Storm. Touching an icon required tapping the glass panel, which had some give, making it feel like a big button.
It was far from a perfect experience. The touch events all worked, but I always felt that I was about to break the screen with each successive touch.
* https://dynamicland.org/ - Bret Victor's vision, looks really cool
* Kinect was released (November 4, 2010) a little before this article and presented another vision of the future, but the market didn't think so
* Oculus now detects hands; I'm pretty hopeful this will add more gestures, and similar gesture detection will be huge for interfaces
All in all, the incremental changes are starting to look more like what Bret is suggesting rather than purely "pane of glass"
Source: Andy Matuschak mentions it in
One thing which comes to mind is that Dynamicland is a strange laboratory. It was a physical space in Oakland (now gone) where the primary activity being undertaken was creating this very unusual computing system.
And in fact, that's exactly what the principal investigator is doing right now: picking up and relocating the work to a very interesting synthetic biology lab, where perhaps the further development of the system will happen in a way that's meant to support this professor's research.
1) Bret brought the computer into the world, instead of bringing the world into the computer, e.g. Oculus or Vive.
2) The operating system that senses the world and reads instructions from objects is influenced by Smalltalk, and from what I understand it allows Smalltalk-like programs to run on it in the form of object instructions and interactions.
The Kinect has pretty much dried up for video games, but the company that developed the first version of the Kinect for Microsoft was later purchased by Apple, and their technology underpins the FaceID tech that appears in every iOS device these days.
(Apple has also had rear-facing Lidar on their iPads & iPhones for a few years now, and I believe that it is also an evolution of the Kinect tech, but I don't know for sure.)
I am disappointed that it withered away for video games, since it was really interesting & fun technology.
But we don’t allow that, because we are (rightly) worried that doing so would give all our private and sensitive personal information to greedy companies and invasive governments.
The future of computer interaction depends not on better hardware or algorithms but on trust. And discretion. Solve those problems and you will unlock huge potential.
Not that this website is that bad... I just find this attitude to be annoying. :) Maybe I’m still in partial denial about my deficiencies.
Thank goodness for my browser’s reader mode though when the website has way too long lines. No credit to the website designers, of course.
I’m not sure about some of the footnote (raised) style hyperlinks. They could be difficult to use on mobile.
A brief rant on the future of interaction design (2011) - https://news.ycombinator.com/item?id=21116948 - Sept 2019 (153 comments)
A Brief Rant on the Future of Interaction Design (2011) - https://news.ycombinator.com/item?id=6325996 - Sept 2013 (35 comments)
A Brief Rant on the Future of Interaction Design - https://news.ycombinator.com/item?id=3212949 - Nov 2011 (150 comments)
For instance, Nintendo has been exploring that space for a long time, bringing different paradigms for interacting with their devices to the mass market. Microsoft has a long history of UX/UI research and actually shipped adaptive products, including for people with special needs. We’re going way past 2011, but there is also a ton of research on better haptic feedback and on ways to make the “R” part of VR more real.
On the other side, I’m sure I’m not alone in disabling most of the input haptics, sounds, and animations when setting up a new phone or laptop. Glass is fine for visual things, and I’m also fine with only visual feedback when using hand tracking on the Quest 2, for instance: the stuff I am manipulating is menus and lists, fundamentally virtual concepts that need no tactile feedback.
When you talk about haptics, you only mention passive haptic feedback: annoying bumps, buzzes, and rumbles. You're right that those are annoying and mostly useless (though the haptic feedback on MacBook Pro trackpads is quite nice).
But that's not what he's talking about at all. His whole point is that interacting with tools isn't "fundamentally virtual". That's a choice that technology has made because screens of pixels are so amenable to software control.
If you want to, say, decide which restaurant to eat at, there's nothing intrinsically visual or 2D about that. We assume that a screen is the only natural way to do that simply because we're used to that paradigm, which is exactly the problem he's ranting about.
Imagine a "restaurant tray". It has, I don't know, physical sliders and buttons at the top where you can specify what kinds of restaurants you're OK with. When you do, a bunch of tokens appear on the tray for every potential restaurant. You and your party can reach out and grab them. Group a few together as potential ones. Sweep the ones no one wants off to the side. Maybe let each person take and hold their favorite, or pull them over to their side of the tray. Swap and trade them like poker chips.
Think about how much more immediate and collaborative that process would be for reaching agreement on where to go.
That's the kind of stuff he's talking about. You might be thinking, "Well it would be too hard to build a system that creates physical tokens for every possible restaurant." But, again, that's a technological problem we'll only solve if we have a vision for those kinds of experiences in the first place.
If we can't see beyond pixels on screens, we'll never get there.
I would call that the “why don’t we have flying cars” kind of thinking, and the restaurant example feels the same to me. As for “that's a technological problem we'll only solve if we have a vision for those kinds of experiences in the first place”, that’s a valid point. I think where I stray from it is that I don’t feel the appeal of the vision.
A restaurant giving me a complex “choosing experience” instead of presenting the traditionally “flat” list and menus is not an improvement from my perspective.
In general, I think we’re at a stage where interfaces that make more sense physically are still physical. For instance, cars are completely electronically driven, but we (still) have physically optimized controls. My oven is a computer, but I have knobs and buttons that click. My “smart” lights have rotating controls to adjust brightness, etc.
These interfaces are very much out in the world, and I feel they’re absent from the article because the author (and you, perhaps?) take them for granted.
Unfortunately, we're already sliding in the wrong direction even on those examples.
Many newer cars have huge touchscreens and relegate many controls to it that used to have dedicated tactile inputs. A lot of newer electric ranges put all the controls on a screen under the same glass surface as the cooktop. Many smart LEDs only have an app to control them.
 like https://www.adverts.ie/other-electronics/dgt-electronic-ches...
But the general point is that when thinking about "interface", we often don't even consider that the interface could be made of physical things and instead implicitly assume pixels on screen. The idea that you could interact with a system not using video is practically unthinkable. Victor's point is that we can't invent what we can't imagine, so it's important to always have visions that seem unattainable and not give in to accepting things as they are.
I agree 99% of people interact with very shallow feedback, and that’s, I think, a reasonable state: 99% of the applications we use are effectively shallow, requiring very simple input and producing only basic output. (It makes me think of how many people use no more than 2 or 3 keyboard shortcuts for their daily tasks on a desktop computer; they’re not stressing about getting more from these interactions.)
PS: I feel silly for not mentioning Apple’s or Samsung’s styluses with pressure and angle sensitivity…
Hard disagree here. As impressive as the hand tracking in the Quest 2 is (especially the new 2.0 iteration - just...wow.), I'll prophesy that until at least some kind of haptic feedback system is available (Powerglove 2.0! :) ), it will hit a brick wall. Hard.
It's such a natural mode of interaction that the total lack of haptic feedback sticks out like a sore thumb and breaks the immersion hard - so much so that I find myself going back to the Oculus controllers after a few minutes because they feel(!) more natural. They might only be a crude first-order approximation, but at least there I have a "power grip", for example.
I see the future of VR as two-fold, with the use as a specific tool on one side, and games/entertainment on the other side.
On the tool side, I see hand tracking as 90% there. I would use it mostly for navigation and input, and I can totally see myself in Excel-like applications with finger gestures and Minority Report-like movements, but actually well thought out and useful.
I am with you on the game/entertainment side, where more feedback and immersion would tremendously help enjoy being in the moment and feeling things that aren’t there.
Overall I think it's interesting and bizarre that both modern technology and visions of the future have totally sacrificed tactility. It seemed to be all about removing the real world: tactile interfaces are old fashioned, in the future everything works in a way that has minimal connection to reality, e.g. a Minority Report style UI, and so obviously is ethereal and cannot be touched. It makes me wonder why we had that ideal in the first place, and whether that ideal shaped technology or vice versa. Why did we fantasise about losing tactility?
Something I've also noticed is that we almost seem to be unable to imagine a programmable tactile interface, even in science fiction. I guess humans wanted "something extra/futuristic/other-worldly", and that meant having things be unlike anything else in the world, which as the author points out, means something without tactility.
You still had the tactile interface, but you gained the flexibility of the dynamic interfaces.
Not sure how well it performed though.
Soft buttons are also pretty common for test and measurement equipment. I own many modern digital oscilloscopes and spectrum analyzers, all of which feature physical buttons around the screen that are selected by software to do different things.
Apparently touchscreen throttle and helm controls didn't work out, though, and were removed. I haven't read anything about this since 2019, but here's the link: https://www.theverge.com/2019/8/11/20800111/us-navy-uss-john...
The set of laptops I'm willing to run non-macOS on is limited by the set of laptops that have physical touchpad buttons, and that's a diminishingly small market segment :(
Interested to hear if anyone has a setup like that that feels nice and is actually useful?
with a traditional mixing console, full of physical objects to press, grab, twirl, slide etc.
These are the two reasons the "minority report" style of interactive design will never catch on for humans.
This is a known problem, and one that will get people killed, because it demands that drivers focus on the infotainment screen and not the road to do a mundane task that their tactile fingers could easily perform on their own with mechanical controls.
In the 70's and early 80's--analog synthesizers--everything was knobs, sliders, and real buttons.
Then everything was stuffed behind a minimum of buttons and maybe 1 or 2 knobs/sliders if you were lucky, with a plethora of options and menus often buried behind a thin 16x2 LCD display.
That lasted until the early to mid 90's, when analog sounds in electronic music started making a comeback, and by the latter half of the 90's and beyond everything started to have knobs and sliders again, even if the internals were no longer analog.
I do wonder if not for that trend driven by the TB-303 and what not, would that even have happened to synthesizers.
Maybe the same backlash and pendulum swing will happen with everything else, but it seems like there's more pressure than ever to make users fit the product and save costs that way, rather than make the product for users and accept the price increases.
And the current OS X interface is nauseating. How we had decent GUIs when computers were 1/1000th of what today's are, yet this flat, ugly, undefined, pasty BS is considered acceptable, is beyond me.
No joke, I'm returning the M1 machine and will go back to dual-boot Win11 and Ubuntu. Sorry, Apple, but you've lost your way.
Given that this was written in 2011, I commend the author for having an opinion, but it has aged rather like milk to me.
Smartphones are powerful because the interface is reconfigurable in software. That's orthogonal to whether the interface is tactile or not.
Imagine you go to choose between two smartphones:
1. One is like you have today: a flat surface of glass with colored pixels underneath.
2. The other supports a surface that physically changes in response to the application. Open the calculator, and a grid of number buttons appears. Tapping one gives the satisfying click of an old calculator. Switch to a synthesizer and a row of piano keys appears. They have the bounce of a weighted piano and play louder or softer based on how hard you press them. Open a game and a D-pad and joystick materialize.
I know which one I'd pick.
Dynamic Land might be the most interesting approach to collaborative computing in person that I’ve seen to date.
Bret designed the system, wrote the operating system, and the libraries used to interact with the OS via physical objects. It’s awesome.
Besides, the Tilt Five developer program YouTube commercial is the worst. It's like an xbox 360 reveal video at CES or a Qualcomm presentation about digital natives.
In what ways do you feel like TiltFive supersedes what Dynamic Land is doing?
[EDIT]: After reviewing the marketing materials, it looks to me that this project is orthogonal to the goals of Dynamic Land, and they don’t really serve the same purpose.
1) Dynamic Land is intended to allow for computing by interacting with real physical objects, and seeing the outputs displayed back in the real world without augmented reality hardware.
2) TiltFive seems intended to allow for holographic display of traditional or specialized game content onto physical objects. More like advanced AR than tangible or physical computing.
Have you used it? My gut feeling is that Dynamicland is likely something that has to be experienced to be understood. I saw Bret Victor present on Dynamicland at a design conference, and there are tons of little happy and unexpected accidents that come from people using it and experiencing it. It's stuff that you can't throw into bullet points.
What exactly makes one the opposite of the other?
Do you have a citation for this claim? The research I can find seems to show that signers and oral speakers convey concepts at roughly the same rate, with ASL perhaps slightly more efficient than spoken English.
(Signers use fewer signs per second than oral speakers do words but compress more information into each sign, or perhaps simply omit extraneous information that has no effect on the meaning.)
>Constructionism2016 Session 16: Plenary 4, Cynthia Solomon
>One of the funny things Marvin Minsky did in his younger days is that he spent time with another very famous computerist, Claude Shannon.
>And Claude Shannon and Marvin came up with The Most Useless Box In The World.
>It, uh, I have a video of somebody... What it is, is, um, actually, Claude -- Marvin designed it, and Claude built it.
>And it's a box. You turn it on, and a hand comes out and shuts it off. It goes back in.
>People don't know, but that's Marvin and Claude Shannon. Claude Shannon was the father of information theory. That's what he did.
>The best-known "useless machines" are those inspired by Marvin Minsky's design, in which the device's sole function is to switch itself off by operating its own "off" switch. Their popularity has recently been raised by commercial success. More elaborate devices and some novelty toys, which have a more obvious function or entertainment value, have been based on these simple "useless machines". [...]
>The version of the useless machine that became famous in information theory (basically a box with a simple switch which, when turned "on", causes a hand or lever to appear from inside the box that switches the machine "off" before disappearing inside the box again) appears to have been invented by MIT professor and artificial intelligence pioneer Marvin Minsky, while he was a graduate student at Bell Labs in 1952. Minsky dubbed his invention the "ultimate machine", but that sense of the term did not catch on. The device has also been called the "Leave Me Alone Box".
>Minsky's mentor at Bell Labs, information theory pioneer Claude Shannon (who later also became an MIT professor), made his own versions of the machine. He kept one on his desk, where science fiction author Arthur C. Clarke saw it. Clarke later wrote, "There is something unspeakably sinister about a machine that does nothing—absolutely nothing—except switch itself off", and he was fascinated by the concept.
>Minsky also invented a "gravity machine" that would ring a bell if the gravitational constant were to change, a theoretical possibility that is not expected to occur in the foreseeable future.
>The Ultimate Machine
>Davide Moises (1973-), The Ultimate Machine after Claude E. Shannon,
The David Moises Collection, multimedia installation, 2009
Technisches Museum Wien
>SMALL moody useless box " leave me alone " box. This useless box has an attitude. It has a variety of movements and behaves very cute when you shut it off. [...]
I like this vision a whole lot.
I don't think it's his vision; more like "anti-vision". ;)
What I would be much more interested in is eye tracking. I have three screens covered in interactivity, but the speed at which I can interact with them - which is the speed at which I can explore and experiment - is limited by the speed with which I can grasp my mouse, then shove it around to position the cursor. I'm convinced I could do many things so much more fluidly if the machine could see where I was looking and transfer focus there in a flash.
Also, while I'm here: the steering wheel is adjustable in height, as in many cars, but I've never seen a vehicle with indicator lights at both the top and bottom of the driver's panel (they're all at the top), so there's always a chance you can't see them when you adjust the wheel height. I've driven a lot of cars, as we often hire. Seems like a fundamental flaw to me.
I don't think it's sentimental to think it's undesirable to have one of your senses muted.
If it was measurable, I would guess that productivity plummeted for most companies that replaced the traditional computer/mouse/keyboard with a tablet. I remember going to an AT&T store years ago and watching the customer care rep struggle to get my information into their system with an iPad. A five minute data-entry task on a computer took this person almost 20 minutes on their tablet.
I believe this is a hugely under appreciated capability of humans, possibly one of the keys to a type of genius.