In the spirit of the HN guidelines, I'm going to presume the best.
Did you mean "the web (as an application platform) will die", i.e. that we'll once again swing back from the mainframe / thin-client model to powerful local computing platforms?
In the spirit of empowering the user, I too hope the average user once again owns their destiny: the storage, computation, and control of their data. Though I think the web as a publishing medium does empower that user if there are open platforms that promote the ability to choose any fulfillment partner they desire.
The web is just a convention that gained rapid adoption, so now browsers dominate software. As far as conventions go, it is not bad compared to some of the stuff humans have landed on. Better than paving over everything so we can drive and park cars all over, better than everything being single use and disposable. The web has its ups and downs but it is decent based on our track record.
I am exploring an alternative browser-like platform concept that would allow for near-native performance. However, established web protocols are hard to overcome.
Same here. Although most of my UI work is on the Web nowadays, I miss the days when the browser was only for hypertext documents, and everything else was done natively with networking protocols and standards.
Hence my hobby coding has nothing to do with Web technologies: the Web pays the bills, other stuff is for fun.
Even a small amount of data literacy makes you aware that visualizations can deceive. Pie charts make humans overestimate large percentages, a nonzero axis is borderline fraud, and the choice of colors can totally warp a color scale.
I think that in this context it is expected for data literacy to make people suspicious of complex visualizations.
Data literacy should come down to the data itself, not only the visualization of those data. Sure pie charts are the bane of Tufte’s existence but even the best data visualizations of a particular segment of data can be misleading due to misrepresentation of the data underneath from collection to its analysis.
People should be far more skeptical of what they are fed. Data narratives are often misleading, with manipulation of the data, its aggregation, its visualization, and especially the interpretation within context. Data literacy needs to address all of these, not simply how it's visualized; that's just the final step in the entire data and information lifecycle.
I'm not saying "do your own research"; instead, folks should think critically about what they're seeing, attempt to understand what's presented, and put it in the appropriate context before taking anything they're shown, by any organization, at face value.
This is an outrageously reductive meme that has long outstripped its actual usefulness and needs to die. The axis and scale should represent the useful range of values. For example, if your body temperature in Fahrenheit moves more than 5 degrees in either direction, you're having a medical emergency, but on a graph that starts from zero, this would barely be visible. Plotting body temperature from zero would conceal much more than it reveals, which is the opposite of what dataviz is supposed to do.
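To make that concrete, here is a minimal matplotlib sketch (the readings are made up) contrasting a zero-based axis with an axis scaled to the clinically useful range:

    # Hypothetical fever curve: the signal lives in a ~5 degree F band.
    import matplotlib.pyplot as plt

    hours = list(range(12))
    temps_f = [98.6, 98.4, 98.7, 99.1, 100.2, 101.5,
               102.0, 101.1, 100.0, 99.2, 98.8, 98.6]  # made-up readings

    fig, (ax_zero, ax_tight) = plt.subplots(1, 2, figsize=(8, 3))

    ax_zero.plot(hours, temps_f)
    ax_zero.set_ylim(0, 110)      # zero-based axis: the fever all but disappears
    ax_zero.set_title("Axis from zero")

    ax_tight.plot(hours, temps_f)
    ax_tight.set_ylim(97, 103)    # clinically useful range: the spike is obvious
    ax_tight.set_title("Axis over the useful range")

    for ax in (ax_zero, ax_tight):
        ax.set_xlabel("hour")
        ax.set_ylabel("temperature (F)")
    plt.tight_layout()
    plt.show()

On the zero-based panel the fever is a barely visible wiggle; on the range-limited panel the emergency is unmistakable.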
The only reasonable zero-value for temperature is 0K, which unfortunately leads to unreadable graphs. (All other temperature scales are completely arbitrary.) So for the specific case of temperatures, it is in fact completely reasonable to have a nonzero axis. But most graphs are not temperatures.
Pie charts are just as unreadable for medium and small percentages. They encode values as angles. Human perception is not suited to estimating angles relative to each other.
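A quick way to feel the angle problem: plot the same hypothetical shares as a pie and as bars, and try ranking them from each. A minimal sketch:

    import matplotlib.pyplot as plt

    labels = ["A", "B", "C", "D"]
    shares = [24, 26, 23, 27]   # hypothetical, deliberately close values

    fig, (ax_pie, ax_bar) = plt.subplots(1, 2, figsize=(8, 3))

    ax_pie.pie(shares, labels=labels)   # values encoded as angles
    ax_pie.set_title("Which slice is largest?")

    ax_bar.bar(labels, shares)          # values encoded as lengths on a common baseline
    ax_bar.set_title("Now it is obvious")

    plt.tight_layout()
    plt.show()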
Just looking at that "512 paths to the White House" graphic, I'd argue that it's more confusing than useful. Why is Florida at the top? Consider the point where it's "Obama has 255 ways" and "Romney has 1 way". What's the point of the massive arrow to Florida and then taking a very specific route to success? This would only make sense if there were a pre-determined order in which the results must come.
The way it's been done in the past in the UK, for instance, is "A needs X more seats to win, B needs Y more seats to win, Z more seats remain". Simple, clear, and no flashy graphics required.
I know the situation in the US is a bit more complicated with different numbers of representatives per state, but it's still not especially useful to prioritise one state over another in the graphic, because what's important is the relative difference between the totals so far received.
I get that there could be some more presentation of uncalled results and the expected outcome, which would be far more useful than this thing with arrows, but it doesn't look like that graph gives that.
As you mention, the number of electors per state varies by quite a bit. E.g., in the 2012 election covered by the chart, Florida had 29 electors, Ohio had 18 electors, and North Carolina had 15 electors, which is why those three states appear at the top.
The main important effect is that (with only some small exceptions) if a candidate wins a simple majority of the votes in a state, then they receive all of that state's electors. E.g., if a candidate wins 50.01% of the Florida vote, they get 29 electors, but if they win 49.99% of the vote, they get 0 electors. See: the 2000 election, where the overall outcome depended on a few hundred votes in this way.
This means there's a lot of focus on 'flipping' states one way or the other, since their electoral votes all come in blocks. What the chart is showing is that if Romney won Florida, he could afford to lose a few other contested states and still win the national election. But if Obama won Florida (as he in fact did), then Romney would need every other state to go his way (very unlikely!) if he still wanted to have a chance.
That is to say, Florida really was extremely important, given the structure of U.S. presidential elections: it would make or break a candidate's whole campaign, regardless of what happened in the rest of the country. And similarly, the remaining states are ordered by decreasing importance.
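For anyone curious where a number like "512 paths" comes from, here is a toy enumeration in Python. The contested-state elector counts are the 2012 figures mentioned above; the "safe" base totals are assumptions just to make the arithmetic concrete, not necessarily what the chart's authors used:

    # Toy "paths to victory" enumeration over 9 contested states (2**9 = 512 scenarios).
    # Base totals are assumed for illustration only.
    from itertools import product

    contested = {"FL": 29, "OH": 18, "NC": 15, "VA": 13, "WI": 10,
                 "CO": 9, "IA": 6, "NV": 6, "NH": 4}
    base_obama, base_romney = 237, 191   # assumed already-banked electors
    NEED = 270

    paths_obama = paths_romney = 0
    for outcome in product([True, False], repeat=len(contested)):
        obama = base_obama + sum(ev for ev, won in zip(contested.values(), outcome) if won)
        romney = base_romney + sum(ev for ev, won in zip(contested.values(), outcome) if not won)
        paths_obama += obama >= NEED
        paths_romney += romney >= NEED

    print(f"{2 ** len(contested)} scenarios: {paths_obama} winning paths for Obama, "
          f"{paths_romney} for Romney")

Holding one big state fixed (say, awarding Florida to one side) and re-running over the remaining 2**8 scenarios shows how sharply the counts collapse, which is exactly the effect the chart is built around.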
Of course, while results are being counted, you also see simpler diagrams of the current situation. The classic format is a map of the country with each state colored red or blue depending on which way it flips. This is often accompanied by a horizontal line with a red bar growing from one side, a blue bar growing from the other side, and a line in the middle. But people are interested in which states are more important than others, which creates the imagery of 'paths to win'.
Except, in the extreme example I cited from the article: "Obama has 255 ways" and "Romney has 1 way"
At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
I'd also argue that the point you're making is obscured by this image. There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome. IMHO, how it was actually shown on news programs today is much clearer than what the article is proposing.
> At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
When that diagram was created, the election hadn't even started yet, so they didn't know who would actually win Florida. The 1/255 number was contingent on Romney losing Florida (i.e., it was a hypothetical outcome the user could click through). But when they didn't know yet who would win Florida, it was still 76 ways for Romney and 431 ways for Obama.
Anyway, Florida was very important for the result regardless of the chronological order. Suppose that Florida's result was called first, in favor of Obama. Then one more state for Obama would seal the election in his favor, and everyone could call it a day and go home.
On the other end, suppose that Florida's result was called last, and Obama won at least one state beforehand, but not too many. Then everyone would have to wait for Florida in order to know the overall outcome: its result would be absolutely necessary.
> There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome.
Well, if people really wanted to work out the math by hand, they could look it up. But states are called one at a time, so people are naturally interested in 'subsets of states' that can singlehandedly seal the election one way or the other.
The US news covers the US elections from a really strange angle. They act as though, even as the votes are coming in and there is nothing more the candidates can do to change the outcome, the candidates are still "looking for a path to victory", and they list all of the "paths to victory" that could be possible. As though we're watching them stumble through a dark forest.
I'm not sure about this. Why do we constantly need new ways of presenting data?
My main concern is that data eventually becomes easy to read and interpret, especially for people who are not used to it, who are less data- or science-savvy. That it's accessible.
It's good to try to find better ways to present certain cases, but only insofar as it's useful. Otherwise I feel consistency is way better than churning out new ways to look at the data that require effort on the consumer's part (no matter how beautiful or well presented they are) to figure out what they want to know from them.
Innovation for the sake of usefulness is good. Innovation for the sake of innovation feels... definitely not as good (although I wouldn't discard it completely).
Have we already achieved the absolute optimal ways to visualize data? Maybe in some simple cases, yes, but not necessarily in all practical cases.
Should new and better ways to visualize data look drastically different from what we're used to? Maybe, but likely not very often. Revolutionary changes are rare, and incremental improvements are important.
Every example in the article suffers from excessive visual complexity and a lack of clarity as to what's being quantified and how values relate to each other.
The best one is the "four ways to slice the budget" visualization, but even that would just have been better as four separate, straightforward charts.
I guess what killed innovation in data visualization is that the innovations were hindering, rather than helping, the purpose of building data visualizations.
The economics don't support innovative web visualizations: a slight engagement boost for a day is the return on investment. If you're lucky it goes viral on social media, but there are far cheaper ways to accomplish that (e.g. inflammatory rhetoric).
There was probably a core of 50 people mainly responsible for these (with hundreds of thousands in awed aspiration/inspiration) who've since retired or moved on to other interests or got distracted by politics in the meantime after 2016, or any other similar reason. It was probably Mike Bostock's departure from the scene in 2017 that was the core catalyst.
The point of data presentation is to gist the most salient trends; an interactive chart where you can zoom in to the lowest granularity of the data basically defeats the purpose of the plot in the first place. Similarly most animation in charts doesn't really add any meaningful visual data, it's just distracting. I think most consumers of data journalism got pretty bored of scrolling through some massive viz after only a few minutes, and why would they not? People read the news to have the critical points surfaced for them. They don't want to dig through the data themselves (and if they do they're not going to be satisfied with the prebuilt animation). These kinds of things are IMO more fun and interesting to build, rather than to actually try and learn something from.
When a new technology comes along no one knows what ideas are good and what ideas are bad, so people try a bunch of things and most of them aren't very useful and the few that are become standardized. In the case of UX stuff like visualizations users also learn the grammar of the technology and get used to seeing things done in certain ways, which makes it harder to do things differently.
So basically there's less innovation in data visualization because we mostly figured out how to solve our data visualization problems. If you look at the history of printed visualizations I think you'd find a similar pattern. The only somewhat recent innovation I can think of there is the violin plot, which became possible due to advances in statistics that led to probability distributions becoming more important.
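For reference, the violin plot is a one-liner in matplotlib. Here is a small sketch with synthetic data, where the second group is bimodal in a way that a bar of means, or even a box plot, would hide:

    # Violin plot sketch: the density outline reveals that group B is bimodal.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    group_a = rng.normal(50, 5, 300)
    group_b = np.concatenate([rng.normal(40, 3, 150), rng.normal(60, 3, 150)])

    fig, ax = plt.subplots(figsize=(5, 3))
    ax.violinplot([group_a, group_b], showmedians=True)
    ax.set_xticks([1, 2])
    ax.set_xticklabels(["group A", "group B"])
    plt.show()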
Sometimes something is a "solved" problem. There hasn't been a lot of innovation in say, firearms, because we pretty much figured out the best way to make a gun ~100 years ago and there isn't much to improve.
Not everything needs innovation, and trying to innovate anyway just creates a solution in search of a problem.
They said the same thing about hash tables. Innovation from a single individual blew away (no pun intended) all prior expectations and opened an entirely new baseline understanding of this.
Just because we THINK we’ve solved the problem doesn’t mean coming at it from an entirely different angle and redefining the entire paradigm won’t pay dividends.
Sure, and no one is saying that people should stop experimenting and testing out alternative approaches. But we wouldn't expect to see experimental approaches displacing established conventions in mature use cases unless they actually are major breakthroughs that unambiguously improve the status quo. And in those situations, we'd expect the new innovations to propagate rapidly and quickly integrate into the generally accepted conventions.
But there's obviously going to be something analogous to declining marginal utility when trying to innovate in mature problem spaces. The remaining uncaptured value in the problem space will shrink incrementally with each successive innovation that does solve more of the problem. So the rate at which new innovations propagate into the mainstream will naturally tend to slow, at least until some fundamental change suddenly comes along and modifies the constraints or attainable utility in the context, and the process starts over again.
That's true enough. We don't know what we don't know, and there's always the potential for some groundbreaking idea to shake things up. That's why it's important to fund research, even if that research doesn't have obvious practical applications.
But this sort of innovation comes from having an actual solution that makes tangible improvements. It does not come from someone saying "this technology hasn't changed in years, we need to find some way to innovate!" That sort of thinking is how you get stuff like Hyperloop or other boondoggles that suck up a lot of investments without solving any problems.
My irrational side really laments where many parts of modern life are in this process and how...standardised things have become.
When I look at e.g. old camera designs, they are so much more exciting to see evolve, and offer so many cool variations on "box with a hole and a light sensitive surface in it".
Seeing how they experimented and worked out different ways to make an image-making machine with that requirement, I feel like I'm missing out on a period of discovery and interesting development that now is at a well-optimised but comparatively homogenous dead end.
Maturation killed what is considered innovation in this article. Many if not most of the visually impressive but didactically confusing innovations fell by the wayside. We're currently in a new wave of 'innovation' with LLM-generated summaries and 'helpful' suggestions being added here there and everywhere. Most of those will disappear as well once it becomes clear they do not add real value or once terminal devices - browsers etc - have such functionality built-in using locally executed models which are trained on user preferences (which will be extremely enticing targets for data harvesting so they better be well-protected against intruders).
The wave of innovation starts with "ooh shiny new thing" and ends with camps of made up minds. In the case of data visualization, you have the no frills analyzers in one camp who only see visuals as distractions at best and sleight of hand at worst, and short attention span info-tainment consumers in the other camp that are not only easy to please but may even find your overly elaborate data driven stories annoying. What remains is a vanishingly small venn diagram of data-savvy readers and practitioners.
While I didn't agree with a lot of his ideas, this one has proven true over time.
If you meant innovation in a scientific/startup context, then the reasons are as follows:
1. Space: Rent seeking economies create commodities out of physical locations used to build high risk apparatus. Even university campus space is often under synthetic scarcity.
2. Time: Proportion of bureaucratic and financial investment pressure constrain creative resources actually used to solve some challenge.
3. Resources: Competition and entrenched manufacturing capacity asymmetry. Unless one can afford to play the Patent game... people will simply steal/clone your work to fragment the market as quickly as possible. Thus, paradoxically, technology markets degrade before they can mature properly through refinement.
4. Resolve: Individuals focused on racketeering and tying as a business model generally cause harm to the entire industry through naive attempts at a monopoly.
5. Respect: Smart people do not choose to be hapless, and simply vote with their feet when their communities are no longer symbiotic.
There are shelves full of technology the public won't see for years, as there is little incentive to help rip off consumers with Robber Baron economics. This is why "we" can't have nice things... and some people are 10 years into the future. =3
“Some information will always be best conveyed in a straightforward bar or line chart, particularly for audiences that don’t have time to engage deeply"
Good ones are expensive to create, and it turns out there isn't that much money in it. It wasn't clear this was the case early on when HTML5 came out and really enabled these experiences. But after you make a few and realize how much goes into creating them, and how hard it is to extract value from them, it doesn't make that much sense.
Also, that US election needle from 2016 really turned a lot of people off of the whole genre, I think.
The assumption that data visualization innovation is declining needs more evidence. Regardless, the article asserts it's true and then contorts itself arguing for it.
As others have said - the “patterns” have been invented & some pretty good stuff is now out there. For example:
- many science & data exploration youtube channels
There's less need to create completely new techniques after the Cambrian Explosion is over: we already have a baseline of patterns, we can draw from them, and it's less work. That doesn't look as much like innovation but has plenty of value.
Even cooler: thanks to dataviz folks past and present, plenty of visualization libraries exist, almost all are open source, and they work spectacularly well. Give your data to any reasonably new reasoning LLM; ask it something like “write some code to visualize the story that’s in this data”, and prepare to be wowed at what it can produce.
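To be concrete about what that tends to look like, the output is usually a few lines of ordinary pandas/matplotlib boilerplate along these lines (the file name and columns here are hypothetical):

    # Roughly the kind of code an LLM hands back for "visualize the story in this data".
    # The CSV file and column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv", parse_dates=["month"])
    monthly = df.groupby("month", as_index=False)["revenue"].sum()

    fig, ax = plt.subplots(figsize=(7, 3))
    ax.plot(monthly["month"], monthly["revenue"])
    ax.set_title("Monthly revenue")
    ax.set_xlabel("month")
    ax.set_ylabel("revenue")
    fig.autofmt_xdate()
    plt.tight_layout()
    plt.show()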
The takeaway is to be careful about assuming something went away. Maybe new stuff is still happening but is a minority of the overall stuff that exists.
There’s great data viz out there, none of which has gone away & all of which can be used as a starting point without inventing from scratch. The “invention rate” declining is part of the cycle.
As someone who contributed to some of those waves in terms of core tech, got various niche recognitions for it, and collaborated with and employed folks here, I don't think it died - it continues to evolve. (The Graphistry team helped with the GPU dataframe / viz movement, Apache Arrow, etc., and we pay our bills with folks needing to answer tricky graph questions such as linking events or people.)
Much of the shift IMO is about who it's for and what the goal is, and it's quite exciting: gen AI is unleashing a lot of old & new ideas.
Jupyter, then Streamlit, got big. In parallel, ML and now AI. So there's less focus on bespoke viz of large data, often with superficial levels of analytics, and more on ML & AI pipelines to both understand more and, in the generative era, do more. Amazingly, you don't need to be a js wunderkind either - almost anyone with a database can do it.
Nowadays as our team is launching Louie.ai (genAI notebooks, dashboards, & pipelines), it is in part using gen AI to make all our years of GPU/graph/ai tech easy for more of our users. But the best viz is no viz, and the second best is just an answer or a list, and with genAI, one you can ask questions to. So viz is not a goal but a means, and part of a broader process like investigation and automation. We are having to start over a LOT, and so it is not dead, but changed.
Funny enough, one of the last public talks I gave before starting Graphistry all those years ago was at Strangeloop (10 years ago?) where I was describing program synthesis as the future of visualization, and demoing prompt engineering with SAT-solver-era tools. However, that wasn't powerful enough back then for what BI users needed in practice, so the program synthesis leg of our vision just wasn't practical until gpt-3.5/gpt-4 came out 2 years ago. As soon as that happened, we resumed. Building louie.ai has been incredibly stimulating and exciting now that we can finally do it all!
These are an innovative way to sell New York Times subscriptions, but most of us aren't making charts for interactive marketing.
These sorts of animations are cool, but my experience has been that if you have to deal with them daily, you'll want a way to turn them off. People will jam a bunch of charts with considerable data onto the page, and the animations will barely work due to the strained performance.
Perhaps it's just that data visualization has simply matured, and the field has converged on a narrow set of designs that are proven to work? The earlier, "experimental" examples given by the author are indeed beautiful, but I'm not really sure all the fancy animations help me grasp the underlying data.
I did guerrilla usability testing around teaching scale, which included this video[1] (stop-motion animation using CO molecules). Lots of people asked "What are those ripples?". IBM even had a supplementary webpage addressing this (which I no longer see, even on archive). People could easily ask this with me standing beside them, but not so much if viewing the content online. Which raised the UI question of how to encourage such questions.
With LLMs, perhaps people will be able to ask questions of a visualization? What is that? Why is that? What about ...? I don't understand ... Does this mean ...?
Innovation? We didn't even exploit Atom netbooks the way game developers exploited the Game Boy iterations, creating something astounding such as the Cannon Fodder port for the GBC.
Make Luakit (anything WebkitGTK4) usable under a GB of RAM and some n270 CPU and then we'll talk. No, I am not talking about WebGL games or WebGL earth. Simple JS websites with text and maybe some video. Make the <video> tag play as fast as MPV does with my custom config, which can play 720p/30FPS videos perfectly fine.
OTOH, sites from http://wiby.me and http://lite.cnn.com work extremely fast. On gopher, gopher://magical.fish is like having a supercomputer for reading news and accessing some services. Even gopherpedia and weather, too.
It shouldn't be that difficult. A Pentium 4 does SSE2. Atoms did SSE3. FFS, web browsing on an SSE2 machine was crazy fast and it ran older Mozilla/Gecko based browsers like butter. It was the minimum to run Retrozilla/Firefox 52, nothing fancy.
From https://worksinprogress.co/issue/how-madrid-built-its-metro-... contrast Madrid's train stations, copy and pasted as much as possible, vs London's, where each has its own interesting and complex architecture. The article claims simple, consistent Madrid stations were easier to build. Idk if it's true or not, but it's an appealing argument that architects are interested in architectural uniqueness and complexity which adds costs.
Similarly, the data viz architect sort of assumes more complex visualizations would be helpful if the public weren't so phone-addicted and inattentive: "some data stories are just too complex and nuanced to be aggregated in bar charts and line charts. Surely, they'd be curious about more complex visualizations" ...well, where's the discussion of whether visualization complexity is actually good for data stories? If everyone knows how to read a bar chart and a line graph, that's a point in favor of standard copy-and-paste visualization.
The one case where imo fancy visualizations actually help is maps. Not only is the example of facebook friendships in 2010 cool, a world map really is the best way to show where people/connections are (ofc maybe it's just a heatmap ie https://xkcd.com/1138/ idk if they divided by how many people live there but still cool). So there are probably lots of stories a map visualization helps tell by showing where stuff is.
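On the xkcd 1138 point, the fix is a one-line normalization before coloring the map. A tiny pandas sketch with made-up numbers:

    # Divide raw counts by population before mapping, so the heatmap shows rates
    # rather than just where people live. Numbers are made up for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "region":     ["A", "B", "C"],
        "events":     [9000, 1200, 300],
        "population": [8_000_000, 600_000, 90_000],
    })
    df["per_100k"] = df["events"] / df["population"] * 100_000
    print(df.sort_values("per_100k", ascending=False))
    # Region C has the fewest raw events but by far the highest rate.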
But yeah in general I felt like the article spoke to data viz only as art to wow data connoisseurs. There was no defense of why complex stories getting a new data format each time conveys information better than standard bar chart/line graph.
Part 1 of this post https://www.shirleywu.studio/notebook/2025-02-client-comfort... probably speaks more to why it's good to do visualizations more creative than people are used to. I still get the sense of the architect who has more interest in complexity than the client (who's both uninterested and disinterested) though.
Is it just me, or is this "paths to victory" metaphor for presidential elections commonly used by US media a pretty strange way to "narrate" an election outcome, all things considered?
The outcome is effectively fixed, although unknown, once the last polling stations close, so what's with all the evocations of physical phenomena ("blue wall", "battleground states" etc.) when everything that's still happening is a gradual discovery of a completely determined outcome?
It's a way to make sense of chaos and generate a narrative to follow, also how millions of individual votes get reduced into large scale trends (gen alpha men want X, Boomer women living in suburbs want Y, etc).
People are invested in the outcome and want to know if things are headed in the direction they desire.
Innovation in data visualization? From a purely utilitarian view, the purpose of data visualization is to present data in a way people can understand it. If you’re constantly changing the method of visualizing the same thing it’s harder to do that. Sometimes a bar chart is best.
As far as cool visualizations go (ones that are better served as nonstandard visualizations), there are two recent ones that come to mind:
https://youtu.be/TkwXa7Cvfr8 (Especially around 16:56)
https://bbycroft.net/llm
I'd also argue that even if all else is equal, a flashy visualization is worse than a conventional one, as you generally do not want to draw attention to the presentation if your aim is to convey information.
A fun counterpoint is that a flashier visualization may be 'better' than one optimized for accurate and efficient information conveyance IF it causes people to spend more time on the visualization. A flashy infographic conveys information less efficiently than a good chart, but if it causes people to engage with it more you may end up communicating your information to more people.
I love data visualization but it very much reminds me of shred guitar playing, something I also used to very much love.
What non-guitar players are complaining about the lack of innovation in shred guitar playing? It is just not something that non-guitar players really care much about. Good shred vs bad shred is all going to sound the same to the non-guitarist anyway.
It's strange to expect continuous innovation in anything where outputs are measurable against clear, stable targets. Once you've achieved the target, what further effort is necessary?
It's like asking what killed innovation in the shape of wheels. The answer is that we're able to make nearly perfect circular wheels, and there is no more optimal shape for a wheel than a circle, so no further gains for new innovations to capture.
I don't have time for cool or innovative visualization. Show me the bar chart and tell me whether higher or lower is better.
I thought the youtube link would be this https://www.youtube.com/watch?v=SwIyd_gsGWA
Innovation is never constantly increasing. It usually appears in bursts, and stops around the point that humans don't need it as much, or development hits a ceiling of effort. But it's always slowly simmering. Usually it's research or yak-shaving that, after years, suddenly appears as if out of nowhere as a useful product.
I am hopeful that in my lifetime, the web will die. It's such an insanely stupid application platform. An OS on an OS, in a document reader (which, due to humans' ability to go to any lengths to avoid hard work, literally all new network protocols have to be built on top of).
You want cool visualizations? Maybe don't lock yourself into using a goddamn networked document viewer. Native apps can do literally anything. But here we are, the most advanced lifeforms on the planet, trapped in a cage of our own making.
> I am hopeful that in my lifetime, the web will die.
I'd like to see the www go back to its roots as a way to share and browse documents, hyperlinked together. The web worked when it was just documents to render and click on links. It is terrible as an application platform.
It's been 30 years since JavaScript was invented. Imagine what we'd have today, if instead of making the WWW into this half-assed application platform, those 30 years of collective brainpower were instead spent on making a great cross-platform native application development and delivery system!
The web as it was originally conceived - readable (but not interactive) content with linked resources - feels a far cry from the web of today, a platform for interactive applications that seems to grow asymptotically towards feature-parity with native applications (UI, input handling, data processing, hardware access) while never quite getting there, encompassing the fundamental things that make 'applications' work.
If the modern web _did_ reach feature parity in some way, the real question would then be 'What makes it different?' As linked resources don't seem like a particularly strong unique feature today, the only other things I can think of are the simpler cross-platform experience and the ease of distribution.
So then the questions are 'What would make for a better cross-platform development experience?' (Chromium embedded framework not included) and 'How do we make app distribution seamless?' Is it feasible or sensible to have users expect to access every application just by visiting a named page and getting the latest version blasted at their browser?
And I guess that's how we got Chrome OS.
> 'What would make for a better cross-platform development experience?'
Back in the day, people balked at using Java as a universal portable application platform, because ironically, everyone wanted their own language and their own platform semantics. Yet everyone on the planet has already unanimously agreed to a cross-platform development platform with a strict set of languages:
- HTML/CSS for presenting text/style/layouts
- Javascript for crafting and organizing the UI
- WebAssembly for compiled code run in a VM
- HTTP for IPC
So right there you have prescribed languages, formats, and protocols. We gave up a single simple prescribed system for a more complicated one.
However, if you wanted to, you could get rid of the browser, and still support those same things - plus everything else a native app can do.
Separate those components into libraries, then link those libraries into your app, have your app load the libraries & components you want, and configure how the library should handle the presentation, UI, and business logic. You're then basically shipping a custom, streamlined browser - except it doesn't have to obey browser rules! It can run any code you want, because it's your app. It can present any way you want, because it's your app.
But it's also portable! The UI, the business logic, presentation, would all be using libraries ported to multiple platforms. Just recompile your app on the target platform. Avoid native code for an instant recompile, or add native code and deal with the portability tax.
It's sort of like compiling one browser on multiple platforms, except 1) it's broken down into components, 2) you can interchange those components with ones from different vendors that follow the same standards, and 3) you are not limited to these libraries - you can add your own native code.
In terms of interoperability with other apps, use the same network protocols, the same URIs. You can add new ones too, if you prefer, just for your own custom backend for your app... but probably everyone will stick to HTTP, because it's a universal way to access everyone's backend APIs or other native apps across sandboxes.
I wish an embedded Tcl had won out over JS.
Also, I loved mozplugger. Embedded applications inside the browser.
For the younger HN users, mozplugger basically opened some videos/ audio / documents, by simply embedding some native window from an application inside the browser. That's it. For Windows users, imagine the Sumatra PDF subwindow inside a browser tab to open the PDF's. Or VLC for HD videos.
Video playing with mplayer under Linux was really fast in late 00's/early 10's.
Far better than today, with Chrome embedding ffmpeg (or something like it) and still performing like ass, not delivering the full performance. Because even with WebkitGTK4, running MPV with some h264-based settings outperforms Luakit on <video> tags by a huge margin.
> So then the questions are 'What would make for a better cross-platform development experience?' (Chromium embedded framework not included) and 'How do we make app distribution seamless?'
The question for me isn't what would make for a better developer experience, it's what would make for a better user experience. And, personally, as a user, what makes my experience better is when I get to decide how an app looks and works, and the developer's say in that is more limited. That is the big flaw with web apps: too many app authors want to be artists and want to create custom interfaces and lots of shiny gizmos. I want apps to generally be using standardized widgets like menus and buttons whose look and feel is determined by the user's platform preferences and are not a bespoke creation of the app author.
I couldn't agree more, though it's kind of unfortunate that OS's are also going in the same direction. Meaningful innovation has, more or less, ended in the OS world - instead it's OSes with shiny things I don't want, controls that look different just for the sake of looking different - sorry, "new" - packed with ad-tech I don't want, and soon undoubtedly to be packed with "AI" tech which I don't want.
And all of this forced upon you thanks to imposed obsolescence with things like hardware [in]compatibility with older OSs. The entire tech world is really screwed up with how adversarial companies have become with their "customers."
Yeah, when I found out what "client side decorations" are a little bit of me died inside. :-)
The web gained traction as a development platform because for the most part, it broadly works the same on every device due to the web standards, and so it's very easy to develop something that works consistently on all the different devices. Purists may bemoan that things no longer respect the "native look and feel" but that is a feature, not a bug, for the vast majority of users and developers. As an example, I absolutely hate that my work email on Outlook does not have the same feature set on Windows vs Mac vs whatever, and even in scenarios where application developers want to deliver the same features everywhere the minutiae of the native development patterns make it like herding cats.
It is basically the electrical plug of our era, in that it is a means to an end, never mind whether 110V 60Hz is actually the most efficient way to deliver power in the home in North America.
We have JavaFX and Qt, and they're both better than ever, but they don't see much use. With JavaFX you can build and distribute a portable .jar file, and I think it can be used with JNLP/Java Web Start for distribution if you prefer that approach. With Qt, you're likely to be delivering self-contained native application packages in the target platform's native form.
(JavaFX has been carved out of the core JVM, which is annoying, but if the target machine has a JVM installed that bundles JavaFX, you're all set.)
I'm not sure you can say that Qt doesn't see much use, when there are hundreds, probably thousands, of well-known commercial apps using it, and it's in millions upon millions of embedded systems. And obviously KDE as well.
It's just that it's largely invisible. It works, it does exactly the job it's meant to do, and it does it well.
Indeed JavaFX is better than ever ;-) See https://www.jfx-central.com/ for many example applications, libraries, tutorials, etc.
Because most folks nowadays would rather ship Chrome with their application, and then they complain Google has taken over the Web.
Native for Windows, Macs, Linux, iPhones and Android devices?
Now imagine trying to update all of those native apps across a large enterprise or multiple large enterprises.
Since I do use multiple devices, when everything is on the web, you also don’t have to worry about syncing or conflict resolution like you do with semi connected scenarios.
> Native for Windows, Macs, Linux, iPhones and Android devices?
> Now imagine trying to update all of those native apps across a large enterprise or multiple large enterprises.
With the tools we have now, it would absolutely not work. In my post I was imagining a parallel alternate universe where native development tools got all the brainpower and innovation over the last 30 years, instead of the web tools getting it.
The web got it because for some _insane_ reason, websites were able to convince IT departments to allow scripts to run.
That left the barn door unlocked. Suddenly the download-everything-every-time (or hope some of it is at least cached) environment of JavaScript / ECMAScript became the ONE place a user could 'for sure' 'install' (run someone else's unapproved) program.
-
Websites, _really_, should work just fine with zero scripts turned on. Possibly with the exception of a short list of trusted or user approved websites.
As opposed to native apps like the parent poster is proposing with no sandbox and that need to be created for each platform?
As opposed to applications authorized by professionals in charge of equipment.
???
It can work. I have spent several years of my life making it work :) Obviously, on mobile you have app stores including enterprise app stores.
On desktop you don't, or you do but they suck so people often don't want to use them. Making shipping desktop apps as easy as shipping a web app is the goal of my company [1] and although it's always a work in progress it does get very close to that goal now. In particular:
- Once set up you can deploy with a single command that builds, signs, integrates auto update engines, renders icons, uploads and everything else. It's not harder than uploading a new version of a web app.
- Apps can do synchronous update checks on startup, web style, so once you push a new version everyone will get that version when they next start up the app (a rough sketch of this kind of check appears below). You can easily implement version checks whilst the app runs to remind users to restart after a while (big SPAs have the same issue if the server changes whilst users' tabs are still open).
- Apps can update in the background too.
- Delta updates make update times very low once the initial install is done.
- On Windows, apps from different vendors can share files. So if two apps share a runtime and it's stable enough that files don't change totally from version to version, the user won't have to download them. Unfortunately on other platforms this nice trick doesn't work, and in a world where runtimes update every few weeks it's of less value. But still.
All this works behind the firewall, also.
[1] https://hydraulic.dev/
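For what the web-style startup check above amounts to, here is a generic sketch (not Hydraulic's actual mechanism; the endpoint and manifest format are hypothetical): fetch a small version manifest, compare it with the local version, and update before launching.

    # Generic shape of a synchronous update check on startup. The endpoint and
    # manifest format are hypothetical, for illustration only.
    import json
    import urllib.request

    LOCAL_VERSION = "1.4.2"
    FEED_URL = "https://updates.example.com/myapp/latest.json"  # hypothetical

    def is_newer(remote, local):
        return tuple(map(int, remote.split("."))) > tuple(map(int, local.split(".")))

    def check_for_update(timeout=3.0):
        """Return the newer version string if one is available, else None."""
        try:
            with urllib.request.urlopen(FEED_URL, timeout=timeout) as resp:
                manifest = json.load(resp)
        except OSError:
            return None  # offline or blocked: just launch the version we have
        remote = manifest.get("version", LOCAL_VERSION)
        return remote if is_newer(remote, LOCAL_VERSION) else None

    if __name__ == "__main__":
        update = check_for_update()
        if update:
            print(f"Version {update} available; applying before launch...")
        else:
            print("Up to date; launching.")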
Native development's story has been, for the longest time, the native way or the highway, which is precisely why it failed in favor of web where everyone begrudgingly supports a common feature set. No one wants to implement the same feature in five different native idioms.
And unfortunately, every system had its own way. Native development has suffered for decades from stubborn OS vendors who are incentivized against portability and cross-platform interoperability. Everyone's "way" was different and required you to rewrite your application to support it. I always wonder: if the web had never taken off as the common platform, would we today have better cross-platform native tools and APIs? I guess we'll never know.
Yes, but in fairness, if operating systems don't differ then what's the point in having them?
This is the "what killed [os] innovation" problem: if nobody is writing native apps then there's no incentive to add new capabilities to your OS.
What's really needed here is a platform with the same advantages as the web, but which makes it much easier for OS vendors to expose proprietary capabilities. Then you will get innovation. Once browser makers killed off plugins it was over for desktop operating systems as outside of gaming (where innovation continues apace), there was no way to let devs write 90% cross platform code whilst still using the 10% special sauce.
There have been cross-platform APIs and tools forever, and they all suck: you end up not taking advantage of the platform and its features.
Java Swing, Electron, Qt, React Native, etc.
If you are going to create a “native” app that doesn’t take advantage of the platform features, you might as well just use the web.
Besides, it’s always a leaky abstraction that forces you to have some type of escape hatch to take advantage of subtleties of the platform.
> An OS on an OS, in a document reader
Versus an interpreted language executed in a runtime running on a virtual thread of an OS running on top of a BIOS over a glorified calculator!? Insanity! Whatever happened to good old-fashioned pen and paper!?
There's nothing wrong with the model of delivering your software as a small program to run in a sandboxed browser environment. WASM, canvas, WebGL -- you can do nearly as much on the web as natively nowadays, with a dead-simple deployment model. One of the only types of programs that's much harder to make as a web application is malware. Calling a modern browser a "networked document reader" is as silly as calling a modern computer a calculator.
The DOM seems fair to call a networked document reader. You've suggested a different build target for what would have been native apps - I think you and OP meet in the middle a bit: you get the power of non-HTML app development. OP laments the overhead of having to shove that into the existing web model designed for documents; you appreciate the sandboxing.
I think you have similar opinions that mostly overlap, regardless of insults about statements being silly.
You’re assuming that teaching fleshy monkeys to smear their gunk on glorified bathroom tissue was ever a good idea.
> Native apps can do literally anything.
That's just as much a downside as an upside. You're putting a lot of trust in a native app that you aren't putting in a website.
What about sandboxed native apps? If the browser can do it, why can't native apps do it as well?
It's much harder than it looks. I've investigated all this very deeply and should really write a blog post about it.
A blog post would be awesome, I haven't done a massive deep-dive. (and no pressure if you end up not writing it)
The gist is that native sandboxing is a mess of undocumented APIs, very different approaches between operating systems, one-size-fits-all policies, kernels are full of bugs, the whole setup is a nightmare to debug and to top it off there are no useful cross-platform abstractions. Not even Chrome has one; beyond Mojo the sandbox is a pile of special cases and platform specific code all over the codebase.
We have sandboxing technology on every modern operating system.
In HN spirit / guidelines, I'm going to presume the best.
Did you mean: "the web (as an application platform) will die" / once again swing back from mainframe / thin client to powerful local computing platforms?
In the spirit of empowering the user, I too hope the average user once again owns their destiny, the storage, computation, and control of their data. Though I think the web as a publishing medium does empower that user, if there are open platforms that promote the ability to choose any fulfillment partner they desire.
The web is just a convention that gained rapid adoption, so now browsers dominate software. As far as conventions go, it is not bad compared to some of the stuff humans have landed on. Better than paving over everything so we can drive and park cars all over, better than everything being single use and disposable. The web has its ups and downs but it is decent based on our track record.
I am exploring an alternative browser-like platform concept that would allow for near native performance. However established web protocols are hard to overcome.
Same here. Although most of my UI work is on the Web nowadays, I miss the days when the browser was only for hypertext documents, and everything else was done natively with networking protocols and standards.
Hence my hobby coding has nothing to do with Web technologies; the Web pays the bills, other stuff is for fun.
Stagnation in viz design has pretty much nothing to do with the shrinking native<->web capability gap, and the web is here to stay.
>>Native apps can do literally anything
like hack your bank account or steal your password...
Even a small amount of data literacy makes you aware that visualizations can deceive. Pie charts make humans overestimate large percentages, nonzero axis is borderline fraud, choice of colors can totally warp color scales.
I think that in this context it is expected for data literacy to make people suspicious of complex visualizations.
Data literacy should come down to the data itself, not only the visualization of those data. Sure pie charts are the bane of Tufte’s existence but even the best data visualizations of a particular segment of data can be misleading due to misrepresentation of the data underneath from collection to its analysis.
People should be far more skeptical of what they are fed. Data narratives are often misleading through manipulation of the data, its aggregation, visualization, and especially the interpretation within context. Data literacy needs to address all of these, not just how the data is visualized; that's simply the final step in the entire data and information lifecycle.
I’m not saying “do your own research;” instead, folks should think critically about what they’re seeing and attempt to understand what’s presented and put it inside the appropriate context before taking anything at face value that they’re shown, by any organization.
> nonzero axis is borderline fraud
This is an outrageously reductive meme that has long outstripped its actual usefulness and needs to die. The axis and scale should represent the useful range of values. For example, if your body temperature in Fahrenheit moves more than 5 degrees in either direction, you're having a medical emergency, but on a graph that starts from zero, this would barely be visible. Plotting body temperature from zero would conceal much more than it reveals, which is the opposite of what dataviz is supposed to do.
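To make that concrete, here is a minimal sketch (made-up numbers, matplotlib) that plots the same fever curve twice, once with a zero baseline and once over a clinically useful range; the zero-based panel flattens the spike into near-invisibility:

    # Minimal illustration of axis-range choice; the temperature values are made up.
    import matplotlib.pyplot as plt

    hours = list(range(12))
    temps_f = [98.6, 98.8, 99.5, 100.4, 101.2, 102.0,
               102.8, 103.1, 102.5, 101.0, 99.8, 98.9]

    fig, (ax_zero, ax_useful) = plt.subplots(1, 2, figsize=(9, 3), sharex=True)

    ax_zero.plot(hours, temps_f)
    ax_zero.set_ylim(0, 110)        # zero baseline: the fever is barely visible
    ax_zero.set_title("Axis from zero")

    ax_useful.plot(hours, temps_f)
    ax_useful.set_ylim(96, 105)     # clinically useful range: the spike is obvious
    ax_useful.set_title("Useful range")

    for ax in (ax_zero, ax_useful):
        ax.set_xlabel("hour")
        ax.set_ylabel("body temp (°F)")

    plt.tight_layout()
    plt.show()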
The only reasonable zero-value for temperature is 0K, which unfortunately leads to unreadable graphs. (All other temperature scales are completely arbitrary.) So for the specific case of temperatures, it is in fact completely reasonable to have a nonzero axis. But most graphs are not temperatures.
This is a very rare case where a nonzero axis is justifiable.
Nevertheless, in >99% of the cases where I encounter a nonzero axis, it is misleading.
> The axis and scale should represent the useful range of values
This should not be confused with "range of values present in the data".
Often the genuinely useful visualization would show that the value barely changed - but that makes for truthful and boring news, so it is avoided.
Any visualization that represents variation around a baseline value should use the baseline value as its axis, whether the baseline is zero or not.
Pie charts are just as unreadable for medium and small percentages. They encode values as angles. Human perception is not suited to estimating angles relative to each other.
Correct title: What Killed Innovation in the Pretty Diagram field?
I keep seeing books with interesting titles like "The evolution of clothing" and then see a subtitle like "In Wisconsin. Between 1985 and 1986."
"From jean vests to jean jackets"
Just looking at that "512 Paths to the White House" graphic, I'd argue that it's more confusing than useful. Why is Florida at the top? Consider the point where it's "Obama has 255 ways" and "Romney has 1 way". What's the point of the massive arrow to Florida and then taking a very specific route to success? This would only make sense if there were a pre-determined order in which the results must come.
The way it's been done in the past in the UK, for instance, is "A needs X more seats to win, B needs Y more seats to win, Z more seats remain". Simple, clear, and no flashy graphics required.
I know the situation in the US is a bit more complicated with different numbers of representatives per state, but it's still not especially useful to prioritise one state over another in the graphic, because what's important is the relative difference between the totals so far received.
I get that there could be more presentation of uncalled results and the expected outcome, which would be far more useful than this thing with arrows, but it doesn't look like that graph gives it.
> Why is Florida at the top?
As you mention, the number of electors per state varies by quite a bit. E.g., in the 2012 election covered by the chart, Florida had 29 electors, Ohio had 18 electors, and North Carolina had 15 electors, which is why those three states appear at the top.
The main important effect is that (with only some small exceptions) whichever candidate wins the most votes in a state receives all of that state's electors. E.g., if a candidate wins 50.01% of the Florida vote, they get 29 electors, but if they win 49.99% of the vote, they get 0 electors. See: the 2000 election, where the overall outcome depended on a few hundred votes in this way.
This means there's a lot of focus on 'flipping' states one way or the other, since their electoral votes all come in blocks. What the chart is showing is that if Romney won Florida, he could afford to lose a few other contested states and still win the national election. But if Obama won Florida (as he in fact did), then Romney would need every other state to go his way (very unlikely!) if he still wanted to have a chance.
That is to say, Florida really was extremely important, given the structure of U.S. presidential elections: it would make or break a candidate's whole campaign, regardless of what happened in the rest of the country. And similarly, the remaining states are ordered by decreasing importance.
Of course, while results are being counted, you also see simpler diagrams of the current situation. The classic format is a map of the country with each state colored red or blue depending on which way it flips. This is often accompanied by a horizontal line with a red bar growing from one side, a blue bar growing from the other side, and a line in the middle. But people are interested in which states are more important than others, which creates the imagery of 'paths to win'.
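For anyone curious where the 512 comes from, here is a rough sketch (not the NYT's code) of the underlying combinatorics: nine undecided swing states can split 2^9 = 512 ways, and each split either puts a candidate at 270+ electors or ends in a 269-269 tie. The swing-state elector counts are the 2012 figures; the "safe" baselines of 237 (Obama) and 191 (Romney) are my assumption about the already-called states.

    # Enumerate all 2**9 = 512 ways the 2012 swing states could split and count
    # how many splits give each candidate 270+ electors. Safe-state baselines assumed.
    from itertools import product

    swing = {"FL": 29, "OH": 18, "NC": 15, "VA": 13, "WI": 10,
             "CO": 9, "IA": 6, "NV": 6, "NH": 4}
    SAFE_OBAMA, SAFE_ROMNEY = 237, 191   # assumption: electors already locked in

    obama_paths = romney_paths = ties = 0
    for split in product([True, False], repeat=len(swing)):  # True = state goes to Obama
        obama_swing = sum(ev for ev, to_obama in zip(swing.values(), split) if to_obama)
        obama = SAFE_OBAMA + obama_swing
        romney = SAFE_ROMNEY + sum(swing.values()) - obama_swing
        if obama >= 270:
            obama_paths += 1
        elif romney >= 270:
            romney_paths += 1
        else:
            ties += 1                    # 269-269 outcomes

    print(obama_paths, romney_paths, ties)  # the three counts sum to 512

Re-running the count with some states fixed one way or the other is essentially what the interactive chart does as users click through hypothetical outcomes.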
Except, in the extreme example I cited from the article: "Obama has 255 ways" and "Romney has 1 way"
At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
I'd also argue that the point you're making is obscured by this image. There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome. IMHO, how it's actually shown on news programs is much clearer than what the article is proposing.
> At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
When that diagram was created, the election hadn't even started yet, so they didn't know who would actually win Florida. The 1/255 number was contingent on Romney losing Florida (i.e., it was a hypothetical outcome the user could click through). But when they didn't know yet who would win Florida, it was still 76 ways for Romney and 431 ways for Obama.
Anyway, Florida was very important for the result regardless of the chronological order. Suppose that Florida's result was called first, in favor of Obama. Then one more state for Obama would seal the election in his favor, and everyone could call it a day and go home.
On the other end, suppose that Florida's result was called last, and Obama won at least one state beforehand, but not too many. Then everyone would have to wait for Florida in order to know the overall outcome: its result would be absolutely necessary.
> There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome.
Well, if people really wanted to work out the math by hand, they could look it up. But states are called one at a time, so people are naturally interested in 'subsets of states' that can singlehandedly seal the election one way or the other.
The US news covers US elections from a really strange angle. They act as though, even as the votes are coming in and there is nothing more the candidates can do to change the outcome, the candidates are still "looking for a path to victory", and they list all of the "paths to victory" that could be possible. As though we're watching them stumble through a dark forest.
I had the exact same thought here: https://news.ycombinator.com/item?id=43473149
Really bewildering from an epistemic point of view, even if it's "just a metaphor". (And do people really generally understand it to be just that?)
I'm not sure about this. Why do we constantly need new ways of presenting data?
My main concern is that it eventually becomes easy to read and interpret data, especially for people who are not used to it, who are less data- or science-savvy. That it's accessible.
It's good to try to find better ways to present certain cases, but it's only needed as far as it's useful; otherwise I feel consistency is far better than churning out new ways to look at the data that require effort on the consumer's part (no matter how beautiful or well presented) to figure out what they want to know from it.
Innovation for the sake of usefulness is good. Innovation for the sake of innovation feels... definitely not as good (although I wouldn't discard it completely).
Have we already achieved the absolute optimal ways to visualize data? Maybe in some simple cases, yes, but not necessarily in all practical cases.
Should new and better ways to visualize data look drastically different from what we're used to? Maybe, but likely not very often. Revolutionary changes are rare, and incremental improvements are important.
> That was the year I realized I was experiencing scrollytelling fatigue.
She nailed it.
The people who really, really have to look at graphs of numbers all day have a Bloomberg terminal. The graphics are visually unexciting but useful.
What goes unremarked is that while those examples are visually impressive, they're also unhelpful.
Exactly. I see a lot of graphs and animations that look cool, but when you take a closer look, they don't convey much information.
Every example in the article suffers from excessive visual complexity and a lack of clarity as to what's being quantified and how values relate to each other.
The best one is the "four ways to slice the budget" visualization, but even that would just have been better as four separate, straightforward charts.
I guess what killed innovation in data visualization is that the innovations were hindering, rather than helping, to accomplish the purpose of building data visualizations.
The economics don't support innovative web visualizations; a slight engagement boost for a day is the entire return on investment. If you're lucky it goes viral on social media, but there are far cheaper ways to accomplish that (e.g. inflammatory rhetoric).
There was probably a core of 50 people mainly responsible for these (with hundreds of thousands watching in awed aspiration/inspiration) who have since retired, moved on to other interests, or got distracted by politics after 2016, or some similar reason. It was probably Mike Bostock's departure from the scene in 2017 that was the core catalyst.
The point of data presentation is to distill the most salient trends; an interactive chart where you can zoom in to the lowest granularity of the data basically defeats the purpose of the plot in the first place. Similarly, most animation in charts doesn't really add any meaningful visual information; it's just distracting. I think most consumers of data journalism got pretty bored of scrolling through some massive viz after only a few minutes, and why would they not? People read the news to have the critical points surfaced for them. They don't want to dig through the data themselves (and if they do, they're not going to be satisfied with the prebuilt animation). These kinds of things are IMO more fun and interesting to build than to actually learn something from.
When a new technology comes along no one knows what ideas are good and what ideas are bad, so people try a bunch of things and most of them aren't very useful and the few that are become standardized. In the case of UX stuff like visualizations users also learn the grammar of the technology and get used to seeing things done in certain ways, which makes it harder to do things differently.
So basically there's less innovation in data visualization because we mostly figured out how to solve our data visualization problems. If you look at the history of printed visualizations I think you'd find a similar pattern. The only somewhat recent innovation I can think of there is the violin plot, which became possible due to advances in statistics that led to probability distributions becoming more important.
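As an aside, the violin plot mentioned above is now a one-liner in mainstream plotting libraries; here is a minimal matplotlib sketch with made-up data (the violin outline is a kernel-density estimate of each sample):

    # Minimal violin plot example; the three samples are synthetic.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    samples = [rng.normal(loc=mu, scale=1.0, size=500) for mu in (0, 1, 3)]

    fig, ax = plt.subplots()
    ax.violinplot(samples, showmedians=True)   # KDE-based distribution outlines
    ax.set_xticks([1, 2, 3])
    ax.set_xticklabels(["group A", "group B", "group C"])
    ax.set_ylabel("value")
    plt.show()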
Sometimes something is a "solved" problem. There hasn't been a lot of innovation in say, firearms, because we pretty much figured out the best way to make a gun ~100 years ago and there isn't much to improve.
Not everything needs innovation, and trying to innovate anyway just creates a solution in search of a problem.
They said the same thing about hash tables. Then innovation from a single individual blew away (no pun intended) all prior expectations and established an entirely new baseline understanding of the problem.
Just because we THINK we’ve solved the problem doesn’t mean coming at it from an entirely different angle and redefining the entire paradigm won’t pay dividends.
Sure, and no one is saying that people should stop experimenting and testing out alternative approaches. But we wouldn't expect to see experimental approaches displacing established conventions in mature use cases unless they actually are major breakthroughs that unambiguously improve the status quo. And in those situations, we'd expect the new innovations to propagate rapidly and quickly integrate into the generally accepted conventions.
But there's obviously going to be something analogous to declining marginal utility when trying to innovate in mature problem spaces. The remaining uncaptured value in the problem space will shrink incrementally with each successive innovation that does solve more of the problem. So the rate at which new innovations propagate into the mainstream will naturally tend to slow, at least until some fundamental change suddenly comes along and modifies the constraints or attainable utility in the context, and the process starts over again.
That's true enough. We don't know what we don't know, and there's always the potential for some groundbreaking idea to shake things up. That's why it's important to fund research, even if that research doesn't have obvious practical applications.
But this sort of innovation comes from having an actual solution that makes tangible improvements. It does not come from someone saying "this technology hasn't changed in years, we need to find some way to innovate!" That sort of thinking is how you get stuff like Hyperloop or other boondoggles that suck up a lot of investments without solving any problems.
What's the history here?
https://www.quantamagazine.org/undergraduate-upends-a-40-yea...
Thanks!
I hadn't realised it was something so recent.
My irrational side really laments where many parts of modern life are in this process and how...standardised things have become. When I look at e.g. old camera designs, they are so much more exciting to see evolve, and offer so many cool variations on "box with a hole and a light sensitive surface in it". Seeing how they experimented and worked out different ways to make an image-making machine with that requirement, I feel like I'm missing out on a period of discovery and interesting development that now is at a well-optimised but comparatively homogenous dead end.
Maturation killed what is considered innovation in this article. Many if not most of the visually impressive but didactically confusing innovations fell by the wayside. We're currently in a new wave of 'innovation' with LLM-generated summaries and 'helpful' suggestions being added here, there, and everywhere. Most of those will disappear as well, once it becomes clear they do not add real value, or once terminal devices - browsers etc. - have such functionality built in using locally executed models trained on user preferences (which will be extremely enticing targets for data harvesting, so they had better be well protected against intruders).
In this field? The answer is easy. Data, even pretty data -- maybe ESPECIALLY pretty data -- is not "information," and especially not "wisdom."
At the risk of using an odd term -- it's like -- "Data porn?"
The wave of innovation starts with "ooh shiny new thing" and ends with camps of made up minds. In the case of data visualization, you have the no frills analyzers in one camp who only see visuals as distractions at best and sleight of hand at worst, and short attention span info-tainment consumers in the other camp that are not only easy to please but may even find your overly elaborate data driven stories annoying. What remains is a vanishingly small venn diagram of data-savvy readers and practitioners.
Steve Jobs discussed "Content vs Process" years ago:
https://youtu.be/TRZAJY23xio?feature=shared&t=1770
While I didn't agree with a lot of his ideas, this one has proven true over time.
If you meant innovation in a scientific/startup context, then the reasons are as follows:
1. Space: Rent seeking economies create commodities out of physical locations used to build high risk apparatus. Even university campus space is often under synthetic scarcity.
2. Time: Proportion of bureaucratic and financial investment pressure constrain creative resources actually used to solve some challenge.
3. Resources: Competition and entrenched manufacturing capacity asymmetry. Unless one can afford to play the Patent game... people will simply steal/clone your work to fragment the market as quickly as possible. Thus, paradoxically degrading technology markets before it may mature properly through refinement.
4. Resolve: Individuals focused on racketeering and tying as a business model generally cause harm to the entire industry through naive attempts at a monopoly.
5. Respect: Smart people do not choose to be hapless, and simply vote with their feet when their communities are no longer symbiotic.
There are shelves full of technology the public won't see for years, as there is little incentive to help rip off consumers with Robber Baron economics. This is why "we" can't have nice things... and some people are 10 years into the future. =3
This basically sums it up:
“Some information will always be best conveyed in a straightforward bar or line chart, particularly for audiences that don’t have time to engage deeply"
Good ones are expensive to create, and it turns out there isn't that much money in it. It wasn't clear this was the case early on, when HTML5 came out and really enabled these experiences. But after you make a few and realize how much goes into creating them, and how hard it is to extract value from them, it doesn't make that much sense.
Also, that US election needle from 2016 really turned a lot of people off the whole genre, I think.
The assumption that data visualization innovation is declining needs more evidence. Regardless, the article asserts it's true and then contorts itself arguing for it.
As others have said - the “patterns” have been invented & some pretty good stuff is now out there. For example:
- https://acko.net/tv/toolsforthought/
- https://ciechanow.ski/mechanical-watch/
- https://ciechanow.ski/bicycle/
- many science & data exploration youtube channels
There’s less need to create completely new techniques after the Cambrian Explosion is over, we already have a baseline of patterns & can draw from them & it’s less work. That doesn’t look as much like innovation but has plenty of value.
Even cooler: thanks to dataviz folks past and present, plenty of visualization libraries exist, almost all are open source, and they work spectacularly well. Give your data to any reasonably new reasoning LLM; ask it something like “write some code to visualize the story that’s in this data”, and prepare to be wowed at what it can produce.
Some examples:
- https://x.com/omarsar0/status/1894164720862523651
- https://x.com/christiancooper/status/1881345352235954322
- https://reddit.com/r/ClaudeAI/comments/1ja1yal/claude_37_son...
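For concreteness, here is a minimal sketch of that "hand the data to an LLM" workflow using the Anthropic Python SDK; the CSV filename and model alias are placeholders/assumptions, and the generated plotting code should be reviewed before you run it:

    # Sketch only: send a CSV plus a plotting prompt to an LLM and print its reply.
    # Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
    import anthropic

    with open("sales.csv") as f:              # placeholder data file
        csv_text = f.read()

    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-7-sonnet-latest",     # assumption: any recent reasoning model
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": "Write some Python (matplotlib) code to visualize the story "
                       "that's in this data:\n\n" + csv_text,
        }],
    )
    print(message.content[0].text)            # review before executing the code it returns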
The takeaway is: be careful about assuming something went away. Maybe new stuff is still happening but is a minority of the overall stuff that exists.
There’s great data viz out there, none of which has gone away & all of which can be used as a starting point without inventing from scratch. The “invention rate” declining is part of the cycle.
<3 Shirley
As someone who contributed to some of those waves in terms of core tech, got various niche recognitions for it, and collaborated+employed folks here, I don't think it died but continues to evolve. (The Graphistry team helped with GPU dataframe / viz movement, apache arrow, etc, and pay our bills with folks needing to answer tricky graph questions such as on linking events or people.)
Much of the shift IMO is about who & goal, and quite exciting: gen AI is unleashing a lot of old & new ideas.
Jupyter, then streamlit, got big. In parallel, ML and now AI. So less on bespoke viz on large data, often with superficial levels of analytics. Now more on ML & AI pipelines to both understand more, and in the generative era, do more. Amazingly, you don't need to be a js wunderkind either, almost anyone with a database can do it.
Nowadays as our team is launching Louie.ai (genAI notebooks, dashboards, & pipelines), it is in part using gen AI to make all our years of GPU/graph/ai tech easy for more of our users. But the best viz is no viz, and the second best is just an answer or a list, and with genAI, one you can ask questions to. So viz is not a goal but a means, and part of a broader process like investigation and automation. We are having to start over a LOT, and so it is not dead, but changed.
Funny enough, one of the last public talks I gave before starting Graphistry all those years ago was at Strangeloop (10 years ago?) where I was describing program synthesis as the future of visualization, and demoing prompt engineering with SAT-solver-era tools. However, that wasn't powerful enough back then for what BI users needed in practice, so the program synthesis leg of our vision just wasn't practical until gpt-3.5/gpt-4 came out 2 years ago. As soon as that happened, we resumed. Building louie.ai has been incredibly stimulating and exciting now that we can finally do it all!
I have to make presentations on my team's status to boards and the like.
No matter how cool a solution my team uses and presents, it will be rejected.
It will be rejected, because only static reports in two forms are accepted by the readers - PowerPoint slides or a PDF.
I would love to get away from pie charts, line, and bar graphs. Alas, I am mostly stuck in this.
Any suggestions? I have to show both present-state and over-time things.
> I would love to get away from pie charts, line, and bar graphs.
Why?
Aesthetics - boring, flat, blah.
Burnout - most will pass right over it, then spend 10 minutes asking about details that are already answered in the chart.
MBAs and MVP (Minimum Viable Product)
No doubt due in part to structural tax changes in how R&D spending is expensed
The question I always ask clients is "What's your hypothesis for this being viable?" Many are shocked that the 'V' part exists.
These are an innovative way to sell New York Times subscriptions, but most of us aren't making charts for interactive marketing.
These sorts of animations are cool, but my experience has been that if you have to deal with them daily, you'll want a way to turn them off. People will jam a bunch of charts with considerable data onto the page, and the animations will barely work due to the strained performance.
Perhaps it's just that data visualization has simply matured, and the field has converged on a narrow set of designs that are proven to work? The earlier, "experimental" examples given by the author are indeed beautiful, but I'm not really sure all the fancy animations help me grasp the underlying data.
Simple: large corporations paying politicians to enact restrictive laws that favor their business model. If that fails, one of the following:
* Patent lawsuits
* Trivial lawsuits
* Purchase the small company and eliminate it
* Hire the small company's employees
* Bribe regulators and lawmakers to make it difficult for small players to enter the market or even exist
> So what next?
LLM discussion of visualizations?
I did guerrilla usability testing around teaching scale, which included this video[1] (a stop-motion animation made with CO molecules). Lots of people asked "What are those ripples?". IBM even had a supplementary webpage addressing this (which I no longer see, even on archive). People could easily ask this with me standing beside them, but not so much when viewing the content online. Which raised the UI question of how to encourage such questions.
With LLMs, perhaps people will be able to ask questions of a visualization? What is that? Why is that? What about ...? I don't understand ... Does this mean ...?
[1] IBM's A boy and his atom https://www.youtube.com/watch?v=oSCX78-8-q0 Making of: https://www.youtube.com/watch?v=xA4QWwaweWA
Innovation? We didn't even exploit Atom netbooks the way game developers exploited the Game Boy iterations, creating something astounding such as the Cannon Fodder port for the GBC.
Make Luakit (anything WebKitGTK4) usable under a GB of RAM on some N270 CPU and then we'll talk. No, I am not talking about WebGL games or WebGL Earth. Simple JS websites with text and maybe some video. Make the <video> tag play as fast as MPV does with my custom config, which handles 720p/30FPS videos perfectly fine.
OTOH, sites from http://wiby.me and http://lite.cnn.com work extremely fast. On Gopher (gopher://magical.fish), it's like having a supercomputer for reading news and accessing some services. Even Gopherpedia and weather, too.
It shouldn't be that difficult. A Pentium 4 does SSE2. Atoms did SSE3. FFS, web browsing on an SSE2 machine was crazy fast, and it ran older Mozilla/Gecko-based browsers like butter. That was the minimum to run RetroZilla/Firefox 52, nothing fancy.
Well, Mike Bostock left to create Observable, and in its latest iteration Observable Plot https://observablehq.com/plot/ is amazing.
This makes the old data viz examples from the NYT accessible to the rest of the population who aren't D3.js / canvas / SVG whisperers like Mike.
The loss of function in favor of form. Examples include needlessly complex data visualizations and vague clickbaity titles.
From https://worksinprogress.co/issue/how-madrid-built-its-metro-... contrast Madrid's train stations, copy and pasted as much as possible, vs London's, where each has its own interesting and complex architecture. The article claims simple, consistent Madrid stations were easier to build. Idk if it's true or not, but it's an appealing argument that architects are interested in architectural uniqueness and complexity which adds costs.
Similarly, the data viz architect sort of assumes more complex visualizations would be helpful if the public weren't so phone-addicted and inattentive: "some data stories are just too complex and nuanced to be aggregated in bar charts and line charts. Surely, they’d be curious about more complex visualizations" ...well, where's the discussion of whether visualization complexity is actually good for data stories? If everyone knows how to read a bar chart and a line graph, that's a point in favor of standard copy-and-paste visualizations.
The one case where IMO fancy visualizations actually help is maps. Not only is the example of Facebook friendships in 2010 cool, a world map really is the best way to show where people/connections are (ofc maybe it's just a population heatmap, i.e. https://xkcd.com/1138/ - idk if they divided by how many people live there, but still cool). So there are probably lots of stories a map visualization helps tell by showing where stuff is.
But yeah, in general I felt like the article spoke to data viz only as art to wow data connoisseurs. There was no defense of why complex stories getting a new data format each time conveys information better than a standard bar chart or line graph.
Part 1 of this post https://www.shirleywu.studio/notebook/2025-02-client-comfort... probably speaks more to why it's good to do visualizations more creative than people are used to. I still get the sense of the architect who has more interest in complexity than the client (who's both uninterested and disinterested) though.
> What Killed Innovation?
regulations and taxes
It's more like it's peaked; there's only so much innovation you can add to a 2D canvas.
Quarterly profit over long term growth
Jack Welch.
Great answer. One of the most destructive people of the 20th Century.
Is it just me, or is this "paths to victory" metaphor for presidential elections commonly used by US media a pretty strange way to "narrate" an election outcome, all things considered?
The outcome is effectively fixed, although unknown, once the last polling stations close, so what's with all the evocations of physical phenomena ("blue wall", "battleground states" etc.) when everything that's still happening is a gradual discovery of a completely determined outcome?
It's a way to make sense of chaos and generate a narrative to follow, also how millions of individual votes get reduced into large scale trends (gen alpha men want X, Boomer women living in suburbs want Y, etc).
People are invested in the outcome and want to know if things are headed in the direction they desire.
I think the parent is trying to point out that it's not "are headed", it's "were decided, over 5 hours ago".
Low interest rates.