JSX over the Wire (overreacted.io)
203 points by danabramov 17 hours ago | 131 comments





This was a really compelling article Dan, and I say that as a long-time advocate of "traditional" server-side rendering like Rails of old.

I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:

https://remix.run/docs/en/main/discussion/introduction

> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally.

It was this argument (and a lot of playing around with challengers like htmx, and JSX-like syntax for Python / Go) that has brought me round to the idea that RSCs or something similar might well be the way to go.

Bit of a shame seeing how poor some of the engagement has been on here and Reddit though. I thought the structure and length of the article was justified and helpful. It's concerning how many people's responses are to points quite clearly covered in TFA, which they evidently didn't read...


There are a couple of "red flag" quips that if I hear them coming out of my mouth (or feel the urge to do so), I have to do a quick double take and reconsider my stance. "Everything old is new again" is one of them — usually, that means I'm missing some of the progress that has happened in the meantime.

It's absolutely ridiculous and sad, the level of responses failing basic comprehension, and this is a topic I happen to know well... makes you wonder how much to trust the average HN comment on topics where I am NOT knowledgeable...

The big challenge with this approach, not touched on in the post, is version skew. During a deploy you'll have some new clients talking to old servers and some old clients talking to new servers. The ViewModel is a minimal representation of the data, and you can constrain it with backwards-compatibility guarantees (e.g. Protos or Thrift), while the UI component JSON and its associated JS must be compatible with the running client.

Vercel fixes this for a fee: https://vercel.com/docs/skew-protection

I do wonder how many people will use the new React features and then have short outages during deploys, like the FOUC of the past. Even their Pro plan has only 12 hours of protection, so if you leave a tab open for 24 hours and then click a button, it might hit a server where the server components and functions are incompatible.


Wouldn't this be easy to fix by injecting a version number field in every JSON payload and, if the expected version doesn't match the received one, forcing a redirect/reload?
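A minimal sketch of that idea. The field names and the `BUILD_VERSION` constant are assumptions for illustration, not any framework's real API:

```javascript
// The client is stamped with the version it was built against; every
// payload from the server carries the version it was rendered against.
const BUILD_VERSION = '2024-06-01.3'; // baked in at build time

function handlePayload(payload) {
  if (payload.version !== BUILD_VERSION) {
    // Skew detected: bail out to a full reload rather than trying to
    // render component JSON this client can't interpret.
    return { action: 'reload' };
  }
  return { action: 'render', data: payload.data };
}

console.log(handlePayload({ version: '2024-06-01.3', data: { likes: 5 } }).action); // render
console.log(handlePayload({ version: '2024-05-30.1', data: {} }).action); // reload
```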

Forcing a reload is a regression compared to the "standard" method proposed at the start of the article. If you have a REST API that requests attributes about a model, and the client is responsible for the presentation of that model, then it is much easier to support outdated clients (perhaps outdated by weeks or months, in the case of mobile apps) without interruption, because their pre-existing logic continues to work.

Thrashing is why

Random JSX nugget:

JSX is a descendant of a PHP extension called XHP [1] [2]

[1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...

[2] https://www.facebook.com/notes/10158791323777200/


Internally at Facebook you could also just call React components from XHP. Not very relevant to what you see on Facebook now as a user, but in older internal tools built with XHP it made it very easy to just throw in React components.

When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP component that takes the same props and can be used anywhere in XHP. It worked very well when I last used it!

Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `"__dr": "GroupsCometHighlightStoryAlbumAttachmentStyle.react"`. I never looked into the mechanics of how these worked.


I'm annoyed to learn that even the original PHP version had `class=` working.

In fairness, `className` makes a lot of sense given that the native DOM uses the `className` property rather than the `class` attribute. In that sense, it's a consistent choice, just one consistent with the DOM rather than with HTML.
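For context, the usual explanation for why the DOM API went with `className` in the first place is that `class` is a reserved word in JavaScript, so it can't be used as a bare identifier:

```javascript
// `class` is reserved in JavaScript, which rules out the most
// convenient ways of working with a property by that name.
const props = { className: 'card highlighted' };

// Shorthand destructuring of a reserved word is a SyntaxError:
//   const { class } = props;        // SyntaxError
// You'd have to rename it every single time:
//   const { class: cls } = props;   // legal, but awkward

const { className } = props;
console.log(className); // card highlighted
```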

The bigger issue is the changes to events and how they get fired, some of which make sense, while others just break people's expectations of how JavaScript should work when they move to non-React projects.


Preact fixed that years ago and you can just use class=

I always come back to the idea that I want to render HTML where the state lives rather than shipping both a rendering engine and all the necessary data to a client.

In most cases that means rendering HTML on the server, where most of the data lives, and using a handful of small components in the frontend for state that never goes to the backend.


This article doesn't mention "event handlers" a single time. Even if you get past the client and server getting out of sync and addressing each component by a unique id that's stable between deploys (unless it's been updated), this article doesn't show how you might make any of these components interactive. You can't add an onClick on the server. The best I can figure, you pass these in with a context?

Ultimately this really just smooshed around the interface without solving the problem it sets out to solve: it moves the formatting of the mail markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).


It's not really the scope of the article, but what about adding a client directive [0] and dropping in your event handler? Just like that, you're back in a familiar CSR React world, like in the "old" days.

[0] https://react.dev/reference/rsc/use-client
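The constraint underneath this is worth spelling out: functions have no JSON representation, so an `onClick` simply can't ride along in the server's payload; `'use client'` marks modules whose functions are sent as references instead. A quick sketch of the serialization half (the tree shape here is illustrative, not RSC's actual wire format):

```javascript
// Why event handlers can't be part of the server-sent JSON:
// function-valued properties are silently dropped by JSON.stringify.
const tree = {
  type: 'button',
  props: { onClick: () => console.log('clicked'), children: 'Like' },
};

console.log(JSON.stringify(tree));
// {"type":"button","props":{"children":"Like"}}  (the handler vanished)
```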


You put interactivity in client components; that seemed pretty clear to me.

Hey, thanks for sharing "JSX Over the Wire"! As the creator of ts-liveview, I’m thrilled to see Dan’s ideas on server-side JSX rendering and minimal client updates—they mesh so well with my work.

ts-liveview is a TypeScript framework I built (grab it as a starter project on GitHub[1]) for real-time, server-rendered apps. It uses JSX/TSX to render HTML server-side and, in WebSocket mode, updates the DOM by targeting specific CSS selectors (document.querySelector) over WebSockets or HTTP/2 streaming. This keeps client-side JavaScript light, delivering fast, SEO-friendly pages and reactive UIs, much like Dan’s “JSX over the wire” vision.

What’s your take on this server-driven approach? Could it shake up how we build apps compared to heavy client-side frameworks? Curious if you’ve tried ts-liveview yet—it’s been a fun project to dig into these ideas!

[1] https://github.com/beenotung/ts-liveview


I like this article a lot more than the previous one; not because of length.

In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.

The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.

My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?

I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".


Sounds like another one of Dan's talks, "React from Another Dimension", where he imagines a world in which server-side React came first and then extracted client functionality:

- https://www.youtube.com/watch?v=zMf_xeGPn6s


Great talk, thanks for reminding me about this Mark!

Really like this pattern; it's a new location on the curve of “how much rendering do you give the client”. In the described architecture, JSX-as-JSON provides versatility once you've already shipped all the behavior to the client (a bunch of React components in static JS that can be cached; the React Native example really demonstrated this well).

One way to decide if this architecture is for you, is to consider where your app lands on the curve of “how much rendering code should you ship to client vs. how much unhydrated data should you ship”. On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.

Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story, since after the layout is sent, a great many API calls have to happen to produce a fully hydrated page.

Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.

If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.

If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.

And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.

But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!


Excellent read! This is the first time I feel like I finally have a good handle on the "what" & "why" of RSCs.

It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.

The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.

It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:

1. Do you update the client state optimistically?

2. If you do, what do you do if the server request fails?

3. If you don't, what do you do instead? Intermediate loading state?

4. What happens if some of your friends submit likes the same time you do?

5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?

6. What if a friend submitted a like right after you did, but theirs was persisted before yours?

(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))

Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.

Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.

[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...>

[1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>


I have used RSCs only in Next.js, but to answer your questions:

1./2.: You can update it optimistically. [0]

3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]

4.: In the case of the like button, it would be a "form button" [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.

5.: You block the double request with useTransition [5] to disable the button.

6.: In Next, you would invalidate the cache and would see your like and the like of the other user.

[0] https://react.dev/reference/react/useOptimistic

[1] https://nextjs.org/docs/app/api-reference/functions/revalida...

[2] https://nextjs.org/docs/app/api-reference/directives/use-cac...

[3] https://www.robinwieruch.de/react-form-button/

[4] https://www.robinwieruch.de/react-form-loading-pending-actio...

[5] https://react.dev/reference/react/useTransition


React offers a useOptimistic Hook that is designed for client-side optimistic updates and automatically handles reverting the update upon failure, etc: https://react.dev/reference/react/useOptimistic
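Stripped of any framework, the flow those questions circle around is "apply locally, then reconcile or roll back". A framework-agnostic sketch (the event names and state shape are made up; this is the pattern `useOptimistic` automates, not its implementation):

```javascript
// One like counter, three transitions: optimistic bump, server
// confirmation (server's count wins, folding in friends' likes),
// and rollback on failure.
function likeReducer(state, event) {
  switch (event.type) {
    case 'optimistic-like':
      // Q1: yes, update immediately so the UI feels instant.
      return { count: state.count + 1, pending: true };
    case 'server-confirmed':
      // Q4/Q6: take the server's authoritative count.
      return { count: event.serverCount, pending: false };
    case 'server-failed':
      // Q2: roll the optimistic bump back.
      return { count: state.count - 1, pending: false };
    default:
      return state;
  }
}

let state = { count: 10, pending: false };
state = likeReducer(state, { type: 'optimistic-like' });
state = likeReducer(state, { type: 'server-confirmed', serverCount: 12 });
console.log(state); // { count: 12, pending: false }
```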

> REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.

So, let's assume the alternative universe where we did not mess up and get REST wrong.

There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.

What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?


And this is exactly what we get with htmx.

Is Dan reinventing Astro?

The biggest draw that pulled me to Astro early on was the fact that it uses JSX for what is, in my opinion, a better server-side templating system.


Dan, I've been reading some of your posts and watching some of your talks since Redux, and I really love how passionate you are about this stuff. I think the frontend world is lucky to have someone like you who spends a lot of time thinking about these things enthusiastically.

Anyway, it's hard to deny that React dev nowadays is an ugly mess. Have you given any thought to what a next-gen framework might look like (I'm sure you have)?


Dan Abramov (author) also recently wrote a related post, React for Two Computers.

https://overreacted.io/react-for-two-computers/ https://news.ycombinator.com/item?id=43631004 (66 points, 6 days ago, 54 comments)


What happened to the very elegant GraphQL? Where the client _declares_ its data needs, and _that's all_, all the rest is taken care by the framework?

Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what was given by default by GraphQL


> the very elegant GraphQL

The GraphQL which ‘elegantly’ returns a 200 on errors? The GraphQL which ‘elegantly’ encodes idempotent reads as mutating POSTS? The GraphQL which ‘elegantly’ creates its own ad hoc JSON-but-not-JSON language?

The right approach, of course, is HTMX-style real REST (incidentally there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: ‘your client should be able to request all data for a specific screen at once.’ Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.

The even better approach is to advance the state of the art beyond JavaScript, beyond HTML and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article mentions SDUI as ‘essentially it’s just JSON endpoints that return UI trees’: in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.


N+1, security, authorisation, performance, caching, schema stitching...

N+1 is a solved problem at the framework level. If GraphQL actually affects your performance, congratulations: your application is EXTREMELY popular, more so than Facebook, and they use GraphQL. There are also persisted queries, etc.

Not sure about caching; if anything, GraphQL offers a more granular level of caching, so it can be reused even more?

The only issue I see with GraphQL is that the tooling makes it much harder to get started on a new project, but recent projects such as gql.tada make it much easier, though it still could be easier.


I have been out of touch with the GraphQL ecosystem for a while. What are the status quo solutions to the problems stated above?

For N+1, I just remember the dataloader (https://github.com/graphql/dataloader). Is it still used?

What about the other things? I remember that Stitching and E2E type safety, for example, were pretty brittle in 2018.


We use the dataloader pattern (albeit an in-house Golang implementation) and it has solved all our N+1 problems.

E2E type safety in our case is handled by TypeScript code generation. It works very well. I also happen to have to work in a NextJS codebase, which is the worst piece of technology I have ever had the displeasure of working with, and I don't really see any meaningful difference on a day-to-day basis between the type sharing in the NextJS codebase (where server/client is a very fuzzy boundary) and the other codebase, which just uses code generation and is a client-only SPA.

For stitching we use Nautilus and I've never observed any issues with it. We had one outage because of some description that was updated in some dependency, and that sucked, but for the most part it just works. Our usage is probably relatively simple though.
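The core dataloader trick is small enough to sketch: collect the keys requested in the same tick, issue one batched fetch, and fan the results back out. This is a toy version under that assumption; the real library adds caching, error handling, and configurable scheduling:

```javascript
// Toy dataloader: batches all load() calls made in the same tick
// into a single call to batchFn, turning N queries into 1.
function makeLoader(batchFn) {
  let keys = [];
  let resolvers = [];
  return function load(key) {
    return new Promise((resolve) => {
      keys.push(key);
      resolvers.push(resolve);
      if (keys.length === 1) {
        // First key this tick: schedule exactly one batch dispatch.
        queueMicrotask(async () => {
          const batchKeys = keys;
          const batchResolvers = resolvers;
          keys = [];
          resolvers = [];
          const results = await batchFn(batchKeys); // ONE query, not N
          batchResolvers.forEach((resolve, i) => resolve(results[i]));
        });
      }
    });
  };
}

// Hypothetical batch function standing in for "SELECT ... WHERE id IN (...)".
const loadUser = makeLoader(async (ids) => ids.map((id) => ({ id, name: `user${id}` })));

Promise.all([loadUser(1), loadUser(2), loadUser(3)]).then((users) =>
  console.log(users.map((u) => u.name).join(','))); // user1,user2,user3
```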


Thanks, really appreciate your reply!

That's a backend issue I guess ...

Couldn't you have both?

I assumed RSC was more concerned with which end did the rendering, and GraphQL with how to fetch just the right data in one request


I was just going to say, all of this has been solved with graphql, elegantly.

Very well written (as expected) argument for RSC. It's interesting to see the parallels with Inertia.js.

(a bit sad to see all the commenters that clearly haven't read the article though)


I was immediately thinking of inertia.js.

Inertia is "dumb" in that a component can't request data, but must rely on the API knowing which data it needs.

RSC is "smarter", but also to its detriment in my opinion. I have yet to see a "clean" Next project using RSC. Developers end up confused about which components should be what (and that some can be both), and "use client" becomes a crutch of sorts, making the projects messy.

Ultimately I think most projects would be better off with Inertia's (BFF) model, because of its simplicity.


Inertia is the 'pragmatic' way: in the controller endpoints in your backend, just pass the right amount of data to your Inertia view.

& every interaction is server driven.


Everything old is new again, and I'm not even that old; I still remember that you can return HTML fragments from an AJAX call. But this is worse from any architectural point of view. Why?

The old way was to return HTML fragments and add them to the DOM. There was still a separation of concerns, as the presentation layer on the server didn't care about the interface presented on the client. It was just data, generally composed by a template library. The advent of SPAs made it so that we could reunite the presentation layer (with the template library) on the frontend and just send down the data to be composed with the request's response.

The issue with this approach is that it again splits the frontend, but now you have two template libraries to take care of (in this case one library, but on two sides). The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough not to introduce complexity of its own. JSON is fine as it's easy to audit a parser, and HTML is fine because it's mostly used as-is on the other layer. We also have binary representations, but they also have strong arguments for their use.

With JSX on the server side, it's an abstraction where there's no need for one. And in the wrong place to boot.


It feels like you haven't read the article and commented on the title.

>The old way was to return HTML fragments and add them to the DOM.

Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp

>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.

I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.


From the article:

  Replacing innerHTML wasn’t working out particularly well—especially for the highly interactive Ads product—which made an engineer (who was not me, by the way) wonder whether it’s possible to run an XHP-style “tags render other tags” paradigm directly on the client computer without losing state between the re-renders.
HTML is still a document format, and while a lot of features have been added to browsers over the years, we still have this at the core of any web page. It's always a given that state doesn't survive renders. In desktop software, the process is alive while the UI is shown, so that's great for having state, but web pages started as documents, and the API reflects that. So saying that it's an issue is like saying a fork is not great for cutting.

React is an abstraction over the DOM for having a better API when you're trying not to re-render. And you can then simplify the format for transferring data between server and client. Net win on both sides.

But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantages of having JSX representation of JSON objects on the server side.


>I don't see the advantages of having JSX representation of JSON objects on the server side.

That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).

The change-up happens at this point in the article: https://overreacted.io/jsx-over-the-wire/#viewmodels-revisit...

If you're interested in the "final" code, it's here: https://overreacted.io/jsx-over-the-wire/#final-code-slightl....

It blends the previous "JSON-building" into components.


I'm pointing out that this particular pattern (Server Components) is engendering more complexity than necessary.

If you have a full blown SPA on the client side, you shouldn't use ViewModels as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.

HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.


>you shouldn't use ViewModels as that will tie your backend API to the client.

It doesn’t because you can do this as a layer in front of the backend, as argued here: https://overreacted.io/jsx-over-the-wire/#backend-for-fronte...

Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.

As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components, server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.


With morphdom (or, one day, native DOM diffing), wouldn't HTMX fulfill 80% of your wishlist?

I personally find HTMX pairs well with web components for client components since their lifecycle runs automatically when they get added to the DOM.


What if the internal state of the web component has changed?

Wouldn't an HTMX update stomp over it and reset the component to its initial state?


To be fair, this post is enormous. If I were to try and print it on 8.5x11, it comes out to 71 pages.

I mean sure but not commenting is always an option. I don't really understand the impulse to argue with a position not expressed in the text.

it happens because people really want to participate in the conversation, and that participation is more important to them than making a meaningful point.

Maybe add a TLDR section?

I don't think it would do justice to the article. If I could write a good tldr, I wouldn't need to write a long article in the first place. I don't think it's important to optimize the article for a Hacker News discussion.

That said, I did include recaps of the three major sections at their end:

- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...

- https://overreacted.io/jsx-over-the-wire/#recap-components-a...

- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...


Look, it's your article Dan, but it would be in your best interest to provide a tldr with the general points. It would help people not misjudge your article (this has already happened). It could also make the article more interesting to people who initially dismissed reading something so long. And providing some kind of initial framework might help those who are actually reading it follow along.

Feed it to a LLM and let it give you the gist :)

The 3 tl;dr he just linked seem fine.

the fact that he needed to link to those in a HN comment proves my point...

it really doesn't. stop trying to dumb him down for your personal tastes. he's much better at this than the rest of us

> stop trying to dumb him down for your personal tastes

That's unfair.

If anything you're the one dumbing down what I wrote for your personal taste.


> he's much better at this than the rest of us

That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).


Yet because of that, the issue they were concerned about was shown to thread readers without their having to read 75 pages of text.

Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High quality discussions can point out poor quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.


> The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed.

RSC doesn't impede this. In fact it improves it. Instead of having your ORM's objects converted to JSON, sent, parsed, and finally massaged to your UI's needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck this will be serialized over the wire.

> With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.

JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:

  element = {
    // This tag allows us to uniquely identify this as a React Element
    $$typeof: REACT_ELEMENT_TYPE,

    // Built-in properties that belong on the element
    type,
    key,
    ref,

    props,
  };
As far as I'm aware, TC39 hasn't yet specified which shape of literal is "ok" and which one is "wrong" to run on a computer, depending on whether that computer has a screen or not. I imagine this is why V8, JSC, SpiderMonkey, etc. let you create objects of any shape you want in any environment. I don't understand what's wrong about using this shape on the server.

[1] https://github.com/facebook/react/blob/e71d4205aed6c41b88e36...
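To make the point concrete, here is a hand-built object in that same shape: it's just data, equally valid to create on a server or a client. (Older React versions use this exact `Symbol.for` tag for `$$typeof`; the specific symbol value is an implementation detail and has since changed, so treat it as illustrative.)

```javascript
// A plain object matching React's element shape. Nothing here
// requires a DOM, a screen, or even React itself to construct.
const element = {
  // Tag that lets React (and JSON deserializers) recognize an element
  $$typeof: Symbol.for('react.element'),
  type: 'button',
  key: null,
  ref: null,
  props: { className: 'like', children: 'Like' },
};

console.log(typeof element, element.type, element.props.children);
// object button Like
```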


> The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client.

I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.

In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.


> where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML.

> all these systems that did HTML injection over AJAX were tightly coupled

That's because the presentation layer originated on the server. What the server didn't care about was any transformation that alters the display of the HTML on the client. So you can add an extension to your browser that translates the text to another language and it wouldn't matter to the server. Or inject your own styles. Even when you do an AJAX request, you can add JS code that discards the response.


> Everything old is new again

An age ago I took interest in KnockoutJS, based on Model-View-ViewModel, and found it pragmatic and easy to use. It was however at the beginning of the mad JavaScript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked; Knockout still exists.

https://knockoutjs.com/

Btw, I wouldn't hop back, but better hop forward, like with Datastar that was on HN the other day: https://news.ycombinator.com/item?id=43655914


Knockout was a huge leap in developer experience at the time. It's worth noting that Ryan Carniato, the creator of SolidJS, was a huge fan of Knockout. It's a major influence of SolidJS.

I was a big fan of knockoutjs back in the day! An app I built with it is still in use today.

Very well written. It is rare to see these kinds of high quality articles these days.

Thanks!

I feel the article could have ended after Step 1. It makes the point that you don’t have to follow REST and can build your own session-dependent API endpoints, and use them to fetch data from a component.

I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).

One can argue that it's useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days), but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity that you could avoid by just picking one, and adding SSR is mostly for SEO purposes, from which session-dependent content is excluded anyway.

My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.


The point of doing a server-side render follows from two other ideas:

* that the code which fetches data required for UI is much more efficiently executed on the server-side, especially when there's data dependencies - when a later bit of data needs to be fetched using keys loaded in a previous load

* that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up out of front end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.

The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
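The data-dependency point above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API; the service calls and field names are made up:

```javascript
// Stand-ins for real backend service calls (names are hypothetical).
async function fetchPost(postId) {
  return { id: postId, title: "Hello", authorId: 7 };
}

async function fetchAuthor(authorId) {
  return { id: authorId, name: "Ada" };
}

// The BFF endpoint: shaped like the UI, owned by the front-end team.
// It resolves the dependency chain server-to-server, so the client
// gets one response instead of a request waterfall.
async function getPostScreenViewModel(postId) {
  const post = await fetchPost(postId);
  // Dependent fetch: needs a key loaded by the previous call.
  const author = await fetchAuthor(post.authorId);
  return {
    header: { title: post.title },
    byline: { authorName: author.name },
  };
}
```

If the client made these two calls itself, each hop would cost a full network round trip; done inside the BFF, the hops are cheap service-to-service calls.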


BFF is in practice a pain in the ass. It's a compromise for large enterprises like Google, but many people try to follow what Google does without Google's problem scope and well-developed infra.

Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack: they can at most do some BFF jobs, and you still need an actual backend.


This feels a lot like https://inertiajs.com/ which I've really been enjoying using recently

This. We started using it with Rails and it’s been great.

I do like scrappy rails views that can be assembled fast - but the React views our FE dev is putting on top of existing rails controllers have a much better UX.


I am a huge fan of Inertia. I always felt limited by Blade but drained by the complexity of SPAs. Inertia makes using React/Vue feel as simple as old-school Laravel app. Long live the monolith.

Yeah, there is quite a bit of overlap!

IMO this feels like Preact "render to string" with Express, though I might be oversimplifying things, and granted it wouldn't have all the niceties that React offers.

Feels like HTMX, feels like we've come full circle.


In my checklist (https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...), that would satisfy only (2), (3) if it supports async/await in components, and (4). It would not satisfy (1) or (5) because then you'd have to hydrate the components on the client, which you wouldn't be able to do with Preact if they had server-only logic.

Thanks for the reply Dan. That was a great write up, if I might add.

And yeap, you're right! If we need a lot more client side interactivity, just rendering JSX on server side won't cut it.


RSC is indeed very cool. It also serves as a superior serialization format compared to JSON. For example, it can roundtrip basic types such as `Date` and `Map` with no extra effort.

One thing I would like to see more focus on in React is returning components from server functions. Right now, using server functions for data fetching is discouraged, but I think it has some compelling use cases. It is especially useful when you have components that need to fetch data dynamically, but you don't want the fetch / data tied to the URL, as it would be with a typical server component. For example, when fetching suggestions for a typeahead text input.

(Self-promotion) I prototyped an API for consuming such components in an idiomatic way: https://github.com/jonathanhefner/next-remote-components. You can see a demo: https://next-remote-components.vercel.app/.

To prove the idea is viable beyond Next.js, I also ported it to the Waku framework (https://github.com/jonathanhefner/twofold-remote-components) and the Twofold framework (https://github.com/jonathanhefner/twofold-remote-components).

I would love to see something like it integrated into React proper.


Just use Django/HTMX, Rails/Hotwire, or Laravel/Livewire

LiveView is the OG and absolutely smokes those in terms of performance (and DX), but ecosystem is lacking. Anyways, I’d rather use full stack React/Typescript over slow and untyped Rails or Python and their inferior ORMs.

Phoenix/Liveviews

Fresh/Partials

Astro/HTMX with Partials


IMO:

1: APIs should return JSON because endpoints do often get reused throughout an application.

2: it really is super easy to get the JSON into client side HTML with JSX

3: APIs should not return everything needed for a component; they should return one thing only. It makes the back end and front end simpler and more flexible, and honestly, who cares about the extra network requests?


The X in JSX stands for HTMX.

Yes

unfathomably based

I can't help but read this in a baritone blustering-with-spittle transatlantic voice.

> Since XHP executes on a server that emits HTML, the most that you can do relatively seamlessly is to replace parts of an existing markup with the newly generated HTML markup from the server by updating innerHTML of some DOM node.

It’s a very long post so maybe I missed it, but does Dan ever address morphdom and its descendants? I feel like that’s a very relevant point in the design space explored in the article.


These all seem to be relatively simple concepts for an experienced programmer to understand, but they're being communicated in a very complex way due to the React world of JSX and Components.

What if we just talked about it only in terms of simple data structures and function composition?


isn’t this same thing as graphql?

The main thing that confuses me is that this seems to be PHP implemented in React... and it talks about how to render the first page without a waterfall, and all that makes sense, but the main issue with PHP was that reactivity was much harder. I didn't see / don't understand how this deals with that.

When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?

On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?


>When you have a post with a like button and the user presses the like button, how do the like button props update?

Right, so there's actually a few ways to do this, and the "best" one kind of depends on the tradeoffs of your UI.

Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without "refreshing" any of the server stuff. It "knows" it's been liked. This is the traditional Client-only approach.

Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.

In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.
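The first option (the Like button owning its state and updating optimistically) is just plain state logic; framework aside, it can be sketched like this. The shape is illustrative, and the server call is stubbed out:

```javascript
// Sketch of the "client-only" option: the Like button keeps its own
// state and would POST in the background, without refetching server UI.
function createLikeState(initial) {
  let state = { liked: initial.liked, count: initial.count };
  return {
    get: () => state,
    toggle() {
      // Optimistic update: flip locally first...
      state = {
        liked: !state.liked,
        count: state.count + (state.liked ? -1 : 1),
      };
      // ...then tell the server (fire-and-forget here for brevity):
      // fetch("/likes", { method: "POST", body: JSON.stringify(state) });
      return state;
    },
  };
}
```

The second option replaces this local bookkeeping with a coarse refetch: the server sends fresh props and everything downstream (button, highlights) updates from the same source of truth.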


Another great post!

I like the abstraction of server components but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.

Maybe OP could clear up:

- Whether HTML could be sent instead (depending on platform). There is a brief point about not losing state, but if your component does not have input elements, or can have its state thrown away, then maybe raw HTML could work?

- Prop size vs markup/component size. If you send down a component with a 1:9 dynamic-to-static content ratio, wouldn't it be better to have the 90% static preloaded in the client, and only transmit the 10% of dynamic data? Any good heuristics here?

- "It’s easy to make HTML out of JSON, but not the inverse". What is intrinsic about HTML/XML?
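On that last quote, the asymmetry is easy to demonstrate with a toy view model (names are made up for illustration):

```javascript
// One direction is mechanical: turn a JSON view model into HTML.
function renderLikeHtml(vm) {
  return `<button class="${vm.liked ? "liked" : ""}">${vm.count} likes</button>`;
}

console.log(renderLikeHtml({ liked: true, count: 3 }));
// -> <button class="liked">3 likes</button>
```

The inverse, recovering `{ liked, count }` from that HTML string, requires parsing markup and knowing which attributes encode which fields. The data is entangled with presentation, which is one reason props travel better than rendered output.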

--

Also, is Dan the only maintainer on the React team who does these kinds of posts? Do other members write long form? It would be interesting to have a second angle.


A second angle from the same team?

Or reference the 2+ decades written about the same pattern in simpler, faster, less complex implementations.


I knew this post would eventually peddle me nextJS, and it did!

The framework checklist[1] makes me think of Fulcro: https://fulcro.fulcrologic.com/. To a first approximation you could think of it like defining a GraphQL query alongside each of your UI components. When you load data for one component (e.g. a top-level page component), it combines its own query with the queries from its children UI components.

[1] https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...


Yes, another case of old school web dev making a comeback. "JSX over the wire" is basically server-rendered templates (php, erb, ejs, jinja), sent asynchronously as structured data and interpreted by React to render the component tree.

What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.

[1] https://hotwired.dev


>What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns.

Right, that's why it's in the post: https://overreacted.io/jsx-over-the-wire/#async-xhp

Likewise with CGI: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi

Agree there's echoes of "old" in "new" but there are also distinct new things too :)


Right? Right. I had similar thoughts (API that's the parent of the view? You mean a controller?), and quit very early into the post. Didn't realize it was Dan Abramov, or I might've at least skimmed the 70% and 99% marks, but there's no going back now.

Who is this written for? A junior dev? Or, are we minting senior devs with no historical knowledge?


Really appreciate the quality you put into expressing these things. It was nice just to see a well laid-out justification of how trying to tie a frontend to a backend can get messy quickly. I'm definitely going to remember the "ungrounded abstraction" as a useful concept here.

I skimmed over this and imho it would be better to cut like 30% of the exposition and split it up into a series of articles tackling each style separately. Just my 2c.

I'm hoping someone will do something like that. I try to write with the audience of writers in mind.

There is a part of my brain that is intrigued by React Server Components. I kinda get it.

And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see lengthy blog posts like this.

When the majority of problems can be solved with Django, why tie yourself into knots like this? At what point is it worth it?


I think the rollout is a bit messy (especially because it wasn't introduced as a new thing but kind of replaced an already highly used but different thing). There are pros and cons to that kind of rollout. The tooling is also yet to mature. And we're still figuring out how to educate people on it.

That said, I also think the basic concepts of RSC itself (not "rendering modes", which are a Next thing) are very simple and "up there" with closures, imports, async/await, and structured programming in general. They deserve to be learned and broadly understood.


I've represented JSX/the component hierarchy as JSON for CMS composition of React components. If you think of props as CMS inputs and children as nesting components then all the CMS/backend has to do is return the JSON representation and the frontend only needs to loop over it with React.createElement().
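That loop is small enough to sketch. Here `h` stands in for `React.createElement` so the example runs anywhere, and the JSON shape is illustrative, not any particular CMS's format:

```javascript
// Stand-in for React.createElement: returns a plain element descriptor.
function h(type, props, ...children) {
  return { type, props, children };
}

// What the CMS/backend might return: props are CMS inputs,
// children are nested components.
const tree = {
  type: "Card",
  props: { title: "Hello" },
  children: [{ type: "Button", props: { label: "Click" }, children: [] }],
};

// The frontend only needs one generic recursive loop over the JSON.
function interpret(node) {
  return h(node.type, node.props, ...node.children.map(interpret));
}

const el = interpret(tree);
```

With real React you'd map the `type` strings to actual component functions before calling `createElement`, but the backend's job stays the same: emit the JSON tree.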

It reminds me of when I sent HTML back from my Java Servlets.

It's exciting to see server side rendering come back around.


I believe there is a project (not sure if it's active) called JSX2 that treated this exact problem as a first-class concern. It was pretty fast, too, and emulated the React API of the time quite well. This was 4-5 years ago at least.

Step by step, coming back to JSF.

Or back to its PHP roots.


or webforms, I hate it.

We don't have to go crazy. Let's just meet at MVC and call it a day, deal?

Or you can have your "backend for frontend"... on the frontend, so you don't have an additional layer; it's always written in the frontend language and always synced to the frontend's needs. The lengths we go to to reinvent the square wheel.

Deja vu with this blog. Another overengineered abstraction recreating things that already exist.

Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.

Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.

Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.


>Another overengineered abstraction recreating things that already exist.

This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi

>Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.

Right, that's why I've linked to https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi... the moment we started talking about this. The post also clarifies multiple times that I'm talking about how REST is used in practice, not its "textbook" interpretation that nobody refers to except in these arguments.


> This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi

Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.

> nobody refers to except in these arguments.

Be the change, maybe? People use REST like this because people write articles like this which use REST this way.


>Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.

I wasn't trying to strawman it--I was genuinely trying to show the historical progression. The snark was intended for the likely HN commenter who'd say this without reading, but the rest of the exploration is sincere. I tried to do it justice but lmk if I missed the mark.

>Be the change, maybe?

That's what I'm trying to do :-) This article is an argument for hypermedia as the API. See the shape of response here: https://overreacted.io/jsx-over-the-wire/#the-data-always-fl...

I think I've sufficiently motivated why that response isn't HTML originally; however, it can be turned into HTML which is also mentioned in the article.


The hypermedia constraint is crazy magic itself. It's not like HATEOAS is fewer steps on the application and server side.

We already have a way one way to render things on the browser, everyone. Wrap it up, there's definitely no more to explore here.

And while we're at it, I'd like to know, why are people still building new and different game engines, programming languages, web browsers, operating systems, shells, etc, etc. Don't they know those things already exist?

/s

Joking aside, what's wrong with finding a new way of doing something? This is how we learn and discover things.


I feel like this is the kind of post I would write if I took 2-3x the standard dose of Adderall.

It's the standard dose of Abramov.

This is what happens when I don't write for a few years

Hey, thanks for sharing your thoughts! I appreciate you putting this out there.

One bit of hopefully constructive feedback: your previous post ran about 60 printed pages, this one's closer to 40 (just using that as a rough proxy for time-to-read). I’ve only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material way more digestible.


There's a recap for each major section:

- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...

- https://overreacted.io/jsx-over-the-wire/#recap-components-a...

- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...

I don't think I can compress it further. Generally speaking I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.


From my perspective, the article seems primarily focused on promoting React Server Components, so you could mention that at the very top. If that’s not the case, then a clearer outline of the article’s objectives would help. In technical writing, it’s generally better to make your argument explicit rather than leave it open to reader interpretation or including a "twist" at the end.

An outline doesn't have to be a compressed version, I think more like a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.

--

1: https://analytic-storytelling.com/scqa-what-is-it-how-does-i...


I appreciate the suggestions but that’s just not how I like to write. There’s plenty of people who do so you might find their writing more enjoyable. I’m hoping some of them will pick something useful in my writing too, which would help it reach a wider audience.


The power is in React context letting children refer to parent state. RSC completely solves the RESTful thesis: an RSC returns an SPA with streaming data. It also solves microfrontends architecturally. It is the end game.

SPA developers totally missed the point by reinventing broken abstractions in their frameworks. The missing point is code over convention. Stop enforcing your own broken conventions and let developers use their own abstractions. Things are interpreted at runtime, not compile time. A bundler is for bundling; do not cross its boundary.



