  • My favorite macOS command

    Kam

    08/13/2023, 2:47 PM
For macOS developers, efficiency is everything. Enter `mdfind`, a command that marries speed with precision. What makes it stand head and shoulders above the rest?

    1. Diving into Mdfind

`mdfind` is like having Spotlight's capabilities in the command line. While many search tools painstakingly peruse your filesystem, `mdfind` operates on a shortcut: it pulls results from the 'metadata store' or `mdstore`, a continuously updated catalog. Imagine pulling a specific card from a neatly organized card catalog versus skimming through an entire bookshelf.

    2. Mdfind vs. Find: The Breakdown

Here's how `mdfind` sizes up against `find`:
• Speed: `mdfind` excels with its indexed search, while `find` delves deep into the filesystem, making it relatively slower.
• Scope: `find` restricts itself to file attributes like name, size, or date. Meanwhile, `mdfind` penetrates metadata, fishing out intricate details such as tags or document content.
• Flexibility: `find` pulls ahead in adaptability, with a diverse range of operators for detailed searches, an aspect where `mdfind` plays catch-up.

    3. Mdfind Examples to Kickstart Your Search

If you're itching to put `mdfind` to work, here are some straightforward commands:

    Find All PDFs in Documents Folder

```bash
mdfind -onlyin ~/Documents "pdf"
```

    Locate All Images in Downloads Folder

```bash
mdfind -onlyin ~/Downloads "image"
```

    Search for Files Named 'report'

```bash
mdfind -name "report"
```

    Find a Word, say 'unicorn', within Files

```bash
mdfind "unicorn"
```

Using `mdfind`, file hunting on macOS transforms into a swift, precise adventure. Once you take it for a spin, it's likely to find a permanent spot in your developer toolkit!
  • Linen.dev Front page of your communities

    Kam

    07/25/2023, 6:34 PM

    Intro

My name is Kam, and I’m the founder of Linen. Linen started with the goal of making Slack/Discord communities Google-searchable. In the past few months we launched our own platform and desktop client. Today, we are replacing our landing page with a feed of interesting threads/conversations from your community. Browse it here!

I founded Linen because I noticed so many amazing communities hidden behind walled gardens. Valuable conversations that used to happen in traditional forums were now being obscured. Consequently, more people joined Slack and Discord channels, asking questions that had been answered before. With Linen, we aim to reverse this trend and bring technical communities back into the public eye. Since our launch, we have grown to over 200k monthly visitors within one year, with over 1,000 Slack/Discord/Linen communities.

As Linen expanded, we discovered a wealth of insightful content that deserved a spotlight. Instead of wasting precious real estate on a traditional salesy landing page, we now showcase interesting content that would have otherwise remained hidden forever. Today, we are launching the Linen feed to bring threads together from across communities.

How does it work: Any public community on Linen, whether synced with Slack/Discord or native, has a chance of having its conversations showcased on the front page. Since many of these communities are chat-based, we use simple heuristics like thread length and the number of users interacting with the messages to select interesting conversations. Additionally, we will highlight newly created communities on the home page of Linen. We plan to implement a ranking algorithm based on upvotes/downvotes in the future. For the first version, we will heavily moderate the content to prevent it from devolving into internet wildness. We are still unsure which algorithm is best suited for our use case, so we anticipate a lot of experimentation and learning.

Once you click into a thread, you land on the thread page of the conversation. There, you can see who else is viewing the thread and engage in real-time discussions with like-minded people.

What's next: Beyond a global feed, we aim to introduce the concept of a personal feed. Currently, when someone joins a large chat-based community, it becomes too chaotic, leading them to mute the entire community, which defeats the purpose of joining. We want to provide the ability to create a personalized feed of conversations across multiple communities and types, including Linen/Slack/Discord/Lemmy/Mastodon. This way, you can stay updated and engage with the conversations in one single place. If this interests you, please give a thumbs-up to this GitHub issue: [Link to GitHub Issue](https://github.com/Linen-dev/linen.dev/issues/1419).
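The thread-selection heuristic described above can be sketched roughly like this. The type, field names, and weights below are illustrative assumptions, not Linen's actual code:

```typescript
// Illustrative sketch only: names and weights are assumptions,
// not Linen's real selection logic.
interface Thread {
  messageCount: number;
  participantIds: string[];
}

// Score a thread by its length and how many distinct users took part.
function interestScore(thread: Thread): number {
  const uniqueUsers = new Set(thread.participantIds).size;
  return thread.messageCount + 2 * uniqueUsers;
}

// Pick the top N threads for the feed, highest score first.
function selectForFeed(threads: Thread[], limit: number): Thread[] {
  return [...threads]
    .sort((a, b) => interestScore(b) - interestScore(a))
    .slice(0, limit);
}
```

A real ranking would also fold in moderation and, eventually, the upvote/downvote signal mentioned above.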
  • Building a Slack/Discord alternative with Tauri/Rust

    Kam

    06/20/2023, 4:57 PM

    Introduction

My name is Kam. I’m the founder of Linen, and today we are launching our Mac and Windows desktop clients. Linen is a search-engine-friendly Slack alternative for communities. We started out as a static site renderer but have evolved into a fully fledged Slack alternative.

One of our core tenets at Linen is that performance shouldn’t be an afterthought, and when we were choosing a framework to build our desktop client, we needed to balance our desire for a performant client against the engineering resources of our small team. We didn’t want to be a bloated Electron app, but being a small team, we didn’t have the resources to build a separate desktop client for each platform. Luckily we ended up stumbling upon Tauri, a Rust-based Electron alternative. This lets us build a desktop client with a single codebase across multiple platforms while not contributing to desktop client bloat.

We are launching our Mac (beta), Windows (beta), and Linux (alpha) desktop clients today, and we also want to share our experience building on Tauri instead of Electron. You can download them here: https://github.com/Linen-dev/desktop-client/releases Mac - Beta, Windows - Beta, Linux - Alpha

    What is Tauri and how does it compare to Electron?

Tauri is an open-source Electron alternative built in Rust. The promise of Tauri is that you can build smaller, more performant, and more secure desktop clients. One of the main core differences is that Tauri uses a WebView instead of Chromium, as Electron does. This means that every desktop application doesn’t have to ship with Chromium and can rely on the operating system’s native webview. The downside is that because it uses the system WebView, you have to deal with the quirks of each operating system. Additionally, Tauri’s ecosystem isn’t as mature as Electron’s, which means you will run into rough edges and have to write more custom code.

    Building the desktop client:

    Setting the ground work:

Linen was originally built with NextJS and heavily relied on server rendering, which meant it was actually quite difficult to get Tauri working with NextJS. This isn’t a problem with NextJS or Tauri specifically; the two are just not very compatible if you are using NextJS's server-side rendering features. We discovered this when we tried to build with Tauri and realized it requires you to export a static version of the NextJS app. This is doable, but it essentially means building NextJS as a single-page app, which defeats the purpose of NextJS's server rendering. Side note: we would run into a similar issue with Electron and NextJS as well.

To get this working, we essentially had to refactor all of our code out of NextJS into a separate package that can be reused by both the NextJS app and a single-page app built for Tauri. This refactor took us two months of engineering hours. If you need a desktop client, I would recommend building a classic React single-page app over using NextJS. If you have the unusual requirement of needing both server rendering and a single-page app (like us), I would recommend extracting most of your components into a package that can be reused in both applications, which is ultimately what we ended up doing. The advantage is that these components are very modular and can be reused in different projects/packages. Since our project is a monorepo, this didn’t introduce too many issues.
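For context, here is a rough sketch of how a Tauri v1 project can point at a statically exported NextJS build. The commands and paths are assumptions for illustration, not Linen's actual configuration:

```json
{
  "build": {
    "beforeBuildCommand": "npm run build && npm run export",
    "devPath": "http://localhost:3000",
    "distDir": "../out"
  }
}
```

With a setup like this, Tauri bundles the static `out/` directory instead of talking to a NextJS server, which is exactly why the server-rendering features stop being usable.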

    What was our developer experience with Tauri?

Tauri was quick to set up, and we were able to get a pretty smooth developer experience in the beginning. We were able to build the desktop clients to get started, but ran into a few issues and had to implement some workarounds.
• TitleBarStyle - We wanted the desktop app to feel as native as possible and not like a browser. It was difficult to customize the header styling to exactly what we wanted. We ended up making the header transparent and implementing a custom drag region to make our window draggable. You can see more details at https://github.com/tauri-apps/tauri/issues/2663
• Notifications lacking callbacks/listeners - There is no way to know which notification the user clicked in Tauri. There are a few workarounds; one is using a third-party notification system, but we haven’t found one we were happy with yet. Tauri uses https://github.com/hoodie/notify-rust under the hood, but there is no way for the app to know which notification the user clicked on macOS and Windows, which means we don’t know which thread/channel to show. There is an outstanding issue open here: https://github.com/tauri-apps/tauri/issues/3698 (the underlying dependency is notify-rust). We’ve also opened a bounty for this GitHub issue: https://github.com/hoodie/notify-rust/issues/186
• Drag and drop file uploads - We have multiple text areas for file uploads, but Tauri doesn’t provide information about which element a file is dragged over. Because we have multiple chat boxes where you can drag files to upload, we don’t know if you are replying to a thread or a channel message. One workaround would be to detect the cursor position and select the element based on the draggable area, but that would be a fairly ugly hack. We are hoping that this feature gets released: https://github.com/tauri-apps/tauri/issues/3063
• Linux client - We also ran into an issue with the Linux client’s fonts and emojis not building properly. We’re still working through this, but wanted to release our Mac and Windows clients since they cover over 90% of our users. If anyone has tips or advice, feel free to comment on our GitHub issue: https://github.com/Linen-dev/linen.dev/issues/1291

Overall our experience with Tauri has been positive, but there has been quite a bit of additional work to get even our Mac and Windows clients working. There is still a lot more work to be done to have a great desktop experience, so if anyone would like to take a crack at fixing some of these issues, check out our repo at https://github.com/linen-dev/linen.dev

Our bundle sizes for the desktop clients:
    • Mac - 4.17 MB
    • Windows - 4.13 MB
    • Linux - 73.8 MB
If anyone from the Tauri team, or anyone great at Rust, is reading this, we would love to see some attention on these GitHub issues: https://github.com/hoodie/notify-rust/issues/186 https://github.com/tauri-apps/tauri/issues/3698 https://github.com/tauri-apps/tauri/issues/3063 https://github.com/tauri-apps/tauri/issues/323 Again, you can download the releases here: https://github.com/Linen-dev/desktop-client/releases
  • Linen.dev - Switch between chat and forum

    Kam

    06/14/2023, 5:58 PM
Hi everyone! Today we are releasing our forum view feature. Linen first launched with a forum-like UI for your Slack/Discord community, and we are bringing back part of that experience in Linen. The vision of Linen is to bring the benefits of a traditional forum as well as a modern chat experience to your team/community. While chat is great for real-time and urgent communication, forums have the advantage of a better threading model and organization. Other platforms like Discord have chat and forum views, but they still often end up being a black hole of information. One difference in our approach is that the forum and message views reuse the same underlying data model. This lets us move threads between chat and forum channels. Oftentimes, conversations that start as chat but should really be a support thread can be moved to the proper place.
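A minimal sketch of what that shared data model enables (the types and field names here are illustrative, not Linen's actual schema): a thread only points at its channel, so moving it between chat and forum, or flipping a channel's view, is a small field update.

```typescript
type ViewType = 'chat' | 'forum';

interface Channel {
  id: string;
  viewType: ViewType;
}

interface Thread {
  id: string;
  channelId: string;
}

// Moving a thread between chat and forum channels is just re-pointing it.
function moveThread(thread: Thread, target: Channel): Thread {
  return { ...thread, channelId: target.id };
}

// Switching a channel's view doesn't touch its threads at all.
function switchView(channel: Channel, viewType: ViewType): Channel {
  return { ...channel, viewType };
}
```

Because no message data is copied or transformed, the same thread renders in either view.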

https://i.imgur.com/rpkOrwA.gif

Secondly, this lets you switch between a forum and a chat channel based on the needs of your community. As a community grows, chat becomes too noisy and chaotic, and a forum view tends to scale better.

https://i.imgur.com/vDgic41.gif

And finally, since Linen does two-way syncs between different conversation sources, we can even use the existing emoji system to turn any existing Slack/Discord (Matrix coming soon) community into a forum view that is Google-searchable. (We still have to work through the emoji mapping for the upvotes, though.)

Beyond the forum and chat views, we want to support different “view types”, since fundamentally most communication channels are built on the concept of threads. We want to give you the ability to build very custom workflows and user experiences for your community. This is a very early version of Linen’s forum view and we would love any feedback. There is still so much more to build, but since we are a small team we wanted to ship something fast to see if our vision resonates with the larger community. You can check out our forum view here: https://www.linen.dev/s/linen/c/blog Best, Kam
  • How we made our chat app search engine friendly

    Kam

    05/08/2023, 5:57 PM
Linen is a Google-searchable Slack alternative for communities. We originally started Linen because we realized that Slack and Discord were black holes of information: a lot of relevant content exists inside these technical communities, but it was being lost. It has been a bit over a year since Linen started, and we have learned a lot of technical lessons about making a chat app search engine friendly. We wanted to share a few of our hurdles and learnings.

    Static rendering

The first hurdle to making a website search engine friendly is JavaScript. The same tool that makes your site real-time and interactive also causes problems for search engines. The more JavaScript you have on your site, the harder it is for crawlers to navigate. The internet is enormous, and even though Google seems to have unlimited resources, even they have to be selective about what they crawl. If two websites have the same quality of content, but one is statically rendered and the other relies heavily on JavaScript navigation to reach the content, Google will choose the one that is cheaper to crawl. In the first version of Linen, we simply rendered all the threads from a Slack community, using NextJS to handle the server rendering for us.

    Pagination

Although the first version worked fine, our users were asking for their non-threaded conversations to be indexed. This ran into the second hurdle: pagination. Most chat apps are designed with infinite scrolling in mind, because pagination can arbitrarily cut off conversations, so scrolling up is a better user experience. However, you then run into issues with overlapping content for search engines; ideally, classic number-based pagination works best for them. The solution we ended up with was two separate pagination styles: one for search engines and one for users. For search engines, we came up with a custom pagination style where every 50 messages are grouped into a page. For users, we went with cursor/time-based pagination, where we can randomly index into any message/thread based on a timestamp. We then render dynamically based on whether the browser agent is a bot or a user. The downside of this approach is that, maintenance-wise, it is a lot more work, since you have two different pagination styles.
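The dual-pagination dispatch can be sketched like this. The bot check and type names are simplified assumptions for illustration; real crawler detection is more involved than a user-agent substring match:

```typescript
const PAGE_SIZE = 50; // messages per crawler-facing page

// Very rough crawler detection by user-agent substring (illustrative only).
function isBot(userAgent: string): boolean {
  return /bot|crawler|spider|googlebot|bingbot/i.test(userAgent);
}

type Pagination =
  | { kind: 'numbered'; page: number; pageSize: number } // for crawlers
  | { kind: 'cursor'; timestamp: number };               // for humans

// Choose the pagination style based on who is asking.
function choosePagination(
  userAgent: string,
  page: number,
  timestamp: number
): Pagination {
  return isBot(userAgent)
    ? { kind: 'numbered', page, pageSize: PAGE_SIZE }
    : { kind: 'cursor', timestamp };
}
```

Serving stable, numbered pages to crawlers avoids the overlapping-content problem, while humans keep the infinite-scroll experience.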

    Performance

As Linen kept growing, we ran into the problem that our website wasn’t getting indexed fast enough: we were adding content faster than Google’s crawl budget allowed. A crawl budget is the amount of compute and resources a search engine will dedicate to a specific website, typically calculated by some secret Google algorithm. Increasing the crawl budget is a long-term process of improving the content, quality, reputation, and usability of the website, most of which is difficult to improve in the short term. The one thing we could control was how efficiently we used the crawl budget. The crawl budget is less about a fixed number of pages and more about how much compute a search engine is willing to allocate to your site, so if we decreased the resources it costs to crawl our website, we could increase the number of pages crawled. Ultimately, we went through and optimized our bundle size and shrunk it by half. (You can read more here.) As a result, the number of pages crawled by Google increased by 40% within 2 weeks.

    Conclusion

Beyond these changes, we have probably spent several hundred hours of engineering time experimenting and optimizing Linen to be search engine friendly. The most frustrating part about getting something indexed is the slow feedback cycle: we would change something and not see results for weeks. If you want to see the code for how this all works, check out [https://github.com/Linen-dev/linen.dev](https://github.com/Linen-dev/linen.dev) or hop into this Linen community to say hello.
  • Linen.dev: The 500KB Slack alternative

    Kam

    04/26/2023, 6:58 PM
Linen is an open-source, Google-searchable Slack/Discord alternative for communities. Our vision is to merge the advantages of a forum with the real-time interaction of chat. We launched the first version of Linen a few months ago. Our initial client bundle size of ~1MB gzipped was already more streamlined than most chat apps. But we knew we could do better, so after a bunch of small optimizations, we shrunk it down to ~2MB parsed and ~500KB gzipped. We wanted to share the techniques we used and the process we went through.

    Why Focus on Reducing Bundle Size?

Linen has millions of pages, and search engines have a specific budget for how much compute they allocate to crawling a website. By reducing the bundle size by half, we can potentially double the number of pages that search engines can crawl, all things being equal. Beyond search engine friendliness, a smaller bundle size also means faster initial page load speed, especially on slower connections. As a bonus side effect, it makes us more disciplined when choosing packages and libraries, which helps reduce the security attack surface.

What do these numbers mean?

    To identify areas for improvement, we used the webpack bundle analyzer to target our app/web directory, allowing us to visually pinpoint the space-consuming components.

https://static.main.linendev.com/attachments/5b12e431-c19b-426d-a9a3-0df889e1ac22/Screen_Shot_2023-04-26_at_12.02.57_PM.jpg

This returned three numbers. Stat: 7.27 MB, Parsed: 3.52 MB, Gzipped: 984.32 KB
• Stat is the raw size of the packages and files in your folder as webpack sees them. This is the least important of the three, since it isn't what the client ultimately sees.
• Parsed is what gets run on the client side after the bundler minifies/uglifies your package; this is how much space it takes up on the client's machine. We want to reduce this number, but it is not the highest priority.
• Gzipped is what gets sent over the internet and downloaded; this impacts download and page load speed the most. We cared most about gzipped size, since it affects bandwidth the most.
For a more detailed look, we've made our January bundle size public here.

    Our Optimization Strategies:

Fixing import issues and tree-shaking problems

We saw that react-icons used ~200KB parsed and ~50KB gzipped, even though we were only using a few SVGs from the library. Most of the time, if you are only selectively importing specific functions, your bundler will remove dead code through tree-shaking. We found that react-icons had an issue that led to everything being imported, meaning we were including every single react-icon in our package whether we needed it or not. The workaround was to change the imports to react-icons/all-files instead of react-icons, reducing the bundle size to 10KB parsed and 2KB gzipped.

Replacing libraries with custom code

HeadlessUI didn't support individual imports. Although the package is small, we were only using one component, so we decided to replace the library with our own custom-written component. We also noticed that we were only using the AWS client for S3 uploads on the client side, and it was taking up significantly more bundle size than we needed, so we replaced the entire client-side package with two calls to the AWS API. Finally, we were using lodash's union method, a heavier function that handled a lot more edge cases than we needed, so we replaced it with our own custom function. All in all, this saved us ~600KB parsed and ~100KB gzipped. It can be a bit time consuming, but it helps you become more disciplined about which packages to add.

Moving code to the backend

Finally, we saw that highlight.js (the library for code highlighting) was taking up 30% of our bundle size (~949.06KB parsed, ~292.58KB gzipped). Since most of our users are technical communities, it was obviously required. There was no way we could write our own, and there didn't seem to be a lighter alternative, since parsing programming languages is very complicated. We ended up moving the code highlighting to a backend API that caches the results. This meant that our bundles no longer need to load all of highlight.js.
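As an illustration of the lodash replacement, a minimal `union` that only covers the simple case of flat arrays might look like this. This is a sketch, not Linen's actual code; lodash's version handles many more edge cases:

```typescript
// Minimal replacement for lodash's union, assuming plain arrays of
// primitive values. Preserves first-seen order and drops duplicates.
function union<T>(...arrays: T[][]): T[] {
  return [...new Set(arrays.flat())];
}
```

A few lines like this can replace a library import when you only need the happy path, which is exactly the discipline the post describes.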

    Conclusion

Ultimately, we were able to drop our bundle size by 50%, which meant faster load times, a better user experience, and better SEO performance. We focused mainly on the low-hanging fruit, which took us a few days of engineering effort. All in all, we were happy with the investment, and it took less time than we expected. This is what our bundle size looked like at the end (April 2023). If you see something you think we can reduce, feel free to open a GitHub issue or pull request in our repo.
  • Linen 1.0 release 🎉

    Kam

    01/23/2023, 5:15 PM
Hi everyone, Kam here! We open-sourced and launched an early beta version of Linen back in October. The launch was beyond what we were expecting: we saw hundreds of communities sign up, which led to millions of impressions on search engines for content that would have been hidden if it were only on Slack or Discord. Today we are launching Linen 1.0.

    What is Linen?

Linen is a chat tool that is both real-time and search engine friendly. Most chat tools are designed for closed communities where the content stays inside the platform. This is great for those companies, because their data stays in-house, but not great for communities whose content could help the community at large. We also support importing entire Slack/Discord community conversations into Linen to make them Google-searchable.

    What’s new?

We added a lot of quality-of-life features that were missing from our beta launch.
    • Drag and drop to move/merge messages and threads
    • Github sign in option
    • Notification improvements - Read/unread status
    • Email notification when someone doesn’t see the reply
    • Markdown improvements - inline image, headings, lists
    • Expand images on click
    • Improve scrolling and UI/UX around navigation
    • Multi-community navigation side bar
    • Open/close state for managing threads
    • Community navigation side bar
    • Discord Forum support

    What is next?

Beyond making communities searchable, we want to improve how communities scale. After a certain number of people join a community, the noise starts becoming overwhelming, and we want to build better tooling for that. We want to make it easier to have async conversations, with a notification system that doesn’t stress you out. We think there is a much better product and tool to be built here.
  • Adding Elixir to our Nextjs app

    Kam

    11/28/2022, 7:36 PM

    Adding Elixir to our Nextjs app

Table of contents
1. Context: What is Linen.dev
2. Why did we choose Elixir
3. What is the architecture
4. What trade-offs did we make

    Linen.dev and Context

Linen.dev is an open-source Slack alternative for communities. We started out as a tool to sync Slack and Discord conversations to a search-engine-friendly website, and we are now building a fully fledged Slack alternative for communities. One core part of Linen is real-time chat. We started building Linen with NextJS, mainly to take advantage of its server rendering functionality, but because Vercel hosting doesn’t support long-running processes, we couldn’t use NextJS to set up WebSockets. Because of this constraint, we had to find another solution for WebSockets.

    Why did we choose Elixir

After some preliminary research, we narrowed our choices down to 3 options:
    1. A hosted websocket service like Pusher
    2. A websocket service written in Nodejs with Socket.io
    3. A websocket service written in Elixir with Phoenix
We wanted to open source Linen and eventually make it easy for developers to self-host, and we didn't want to rely on a third-party service, which is why we ended up eliminating a hosted service like Pusher. Eventually it came down to a decision between Socket.io and Elixir with Phoenix. Originally we were leaning towards Socket.io, mainly because we could keep the same stack as the rest of the app. But after some more research we heard a bunch of negative feedback about Socket.io, and one of our team members had a poor experience using it. Prior to working on Linen, I built and maintained a fairly popular Elixir chat app called Papercups. I originally didn't intend Linen to be a real-time chat app, which is why I used NextJS and Node, and typically I would have stuck with Node, but our experience with WebSockets while working on the chat app was surprisingly smooth: we didn't have any issues with scaling, and the performance was great. After some thought, we decided to go with Elixir and Phoenix for the WebSocket service.

    Architecture

    After doing some research we came up with the following architecture:

https://user-images.githubusercontent.com/4218509/200422983-21079c4a-bcb6-4f48-bf82-fdf9d01f1d3f.png

There were two core decisions for the first version:
1. All database writes happen through our existing Node service.
2. Elixir is only responsible for maintaining WebSockets and pushing real-time notifications to the client.
This led to our Elixir service being very simple and lightweight. The only thing it is responsible for is broadcasting events to the client side on the proper channels. By keeping the scope of the Elixir service limited, we didn’t need to duplicate a bunch of our JS code and rewrite it in Elixir. By design, Elixir processes are very fault tolerant, and a single failure will not cause failures in other services. One lesson we learned from our previous project was not to handle inserting data over Phoenix channels/WebSockets: sockets can disconnect, which could cause messages to be dropped, and the security implications meant that Elixir would have to understand a lot more of the scope and permissioning logic. In total we have under 200 lines of custom Elixir code.

Message sending flow:

1. The user logs in, authenticates, connects to the WebSocket, and joins the proper channels. Here is the full code example: https://github.com/Linen-dev/linen.dev/blob/a3f74e80b05e6443ba44bd84a6c7af8017becd3f/nextjs/hooks/WebSockets/index.tsx#L13
```jsx
function useWebSockets({ room, token, permissions, onNewMessage }: Props) {
  const [channel, setChannel] = useState<Channel>();
  const [connected, setConnected] = useState<boolean>(false);
  useEffect(() => {
    if (permissions.chat && token && room) {
      // Set url instead of hard coding
      const socket = new Socket(
        `${process.env.NEXT_PUBLIC_PUSH_SERVICE_URL}/socket`,
        { params: { token } }
      );

      socket.connect();
      const channel = socket.channel(room);

      setChannel(channel);
      channel
        .join()
        .receive('ok', () => {
          setConnected(true);
        })
        .receive('error', () => {
          setConnected(false);
        });
      channel.on('new_msg', onNewMessage);

      return () => {
        setConnected(false);
        socket.disconnect();
      };
    }

    return () => {};
  }, []);

  useEffect(() => {
    channel?.off('new_msg');
    channel?.on('new_msg', onNewMessage);
  }, [onNewMessage]);

  return { connected, channel };
}
```
2. The user sends a message to the Node backend.
3. The client side does an optimistic update and renders the text instantly: https://github.com/Linen-dev/linen.dev/blob/176659feee3c093ffd4fbec6541fa4d154ae9c45/nextjs/components/Pages/Channel/Content/sendMessageWrapper.tsx
```jsx
return fetch(`/api/messages/channel`, {
  method: 'POST',
  body: JSON.stringify({
    communityId,
    body: message,
    files,
    channelId,
    imitationId,
  }),
});
```
4. The Node backend saves the message to the Postgres DB: https://github.com/Linen-dev/linen.dev/blob/176659feee3c093ffd4fbec6541fa4d154ae9c45/nextjs/pages/api/messages/channel.ts#L110
```jsx
const thread = await prisma.threads.create({
  data: {
    channel: { connect: { id: channelId } },
    sentAt: sentAt.getTime(),
    lastReplyAt: sentAt.getTime(),
    messageCount: 1,
    messages,
  } as Prisma.threadsCreateInput
  ...
});
```
5. The Node backend sends the created message to the Elixir push service, along with metadata about which channel it belongs to: https://github.com/Linen-dev/linen.dev/blob/176659feee3c093ffd4fbec6541fa4d154ae9c45/nextjs/services/push/index.ts#L32
```jsx
export const push = ({
  channelId,
  threadId,
  messageId,
  isThread,
  isReply,
  message,
  thread,
}: PushType) => {
  return request.post(`${pushURL}/api/message`).send({
    channel_id: channelId,
    thread_id: threadId,
    message_id: messageId,
    is_thread: isThread,
    is_reply: isReply,
    message,
    thread,
    token,
  });
};
```
6. The Elixir push service then pushes the message to all the users who have joined the channel: https://github.com/Linen-dev/linen.dev/blob/main/push_service/lib/push_service_web/controllers/channel_controller.ex#L6
```elixir
def create(conn, params) do
  %{
    "channel_id" => channel_id,
    "token" => token
  } = params

  PushServiceWeb.Endpoint.broadcast!("room:lobby:" <> channel_id, "new_msg", params)
end
```
We’re using Phoenix channels, which handle the broadcast automatically; see https://hexdocs.pm/phoenix/channels.html for more information.
```elixir
def join("room:" <> community_id, _params, socket) do
  current_user = socket.assigns[:current_user]
  {:ok, assign(socket, :community_id, community_id)}
end
```

    Limitations and trade offs

Going with Elixir had a few downsides. Setting up the deployment process was going to be annoying, and we needed to make sure this separate service was secure. Finally, this setup wasn’t well documented, and we couldn’t find anyone who had attempted it, so the architecture wasn’t clear.
  • HOW-TO create a React UI Library

    Emil Ajdyna

    11/21/2022, 2:29 PM

    Bundle React UI library with SCSS, CSS Modules, Typescript and Rollup

CSS Modules can be very useful for managing styles in large codebases. They're sometimes used together with `sass`, which can help you split or abstract CSS even further. Here's how it would look for a simple `Card` component.
```tsx
/* Card.tsx */

import React from 'react'
import styles from './index.module.scss'

interface Props {
  children: React.ReactNode;
}

export default function Card ({ children }: Props) {
  return <div className={styles.card}>{children}</div>
}
```
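For completeness, a matching stylesheet could look like this (the `.card` rules are made up for illustration):

```scss
/* index.module.scss */

.card {
  padding: 1rem;
  border: 1px solid #e0e0e0;
  border-radius: 4px;
}
```

The CSS Modules build step rewrites `.card` into a scoped class name, which is what `styles.card` resolves to in the component above.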

    Setup

The UI library is going to have two main output files, `dist/index.js` and `dist/index.css`. You'll need to include the CSS file manually inside of your app. Configuration-wise, we'll need to set up a build process that creates a bundle for us. Here's an example rollup configuration file.
```js
/* rollup.config.js */

const resolve = require('@rollup/plugin-node-resolve');
const commonjs = require('@rollup/plugin-commonjs');
const typescript = require('@rollup/plugin-typescript');
const postcss = require('rollup-plugin-postcss');
const external = require('rollup-plugin-peer-deps-external');

module.exports = {
  input: 'src/index.tsx',
  output: {
    file: 'dist/index.js',
    format: 'cjs',
  },
  plugins: [
    resolve(),
    postcss({
      extract: true,
      modules: true,
      use: ['sass'],
    }),
    commonjs(),
    typescript({ tsconfig: './tsconfig.json' }),
    external(),
  ],
};
```
It's using a few dependencies:
```shell
npm install --save-dev rollup
npm install --save-dev @rollup/plugin-node-resolve
npm install --save-dev @rollup/plugin-commonjs
npm install --save-dev @rollup/plugin-typescript
npm install --save-dev rollup-plugin-postcss
npm install --save-dev rollup-plugin-peer-deps-external
npm install --save-dev sass
```
You should also install react.
```shell
npm install react
```
At this point, running `npx rollup --config rollup.config.js` would create a bundle that includes `react`. We shouldn't include react there, because it would significantly increase your app size if you're using a different react version, or cause other unintended side effects. The solution is to move `react` to `peerDependencies` inside of `package.json`. `npm` does not offer a shortcut for installing peer dependencies yet (2022), so you need to do it manually.
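A minimal sketch of that `package.json` change (the version ranges are illustrative):

```json
{
  "peerDependencies": {
    "react": ">=17"
  },
  "devDependencies": {
    "react": "^18.2.0"
  }
}
```

The `devDependencies` copy keeps `react` available locally for development and tests, while consumers of the library provide their own copy at runtime.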

    Examples

You can find a real-life example in our repo; it also contains a basic `jest` test setup: https://github.com/Linen-dev/linen.dev/tree/main/packages/ui I hope this short article helps even a little :). In case of any questions, just ask them in the thread.