

Using Node Modules in Deno




A bad practice, but sometimes there is no alternative.

Last time we introduced Deno and discussed how it compares to Node. Like Node, Deno is a server-side code-execution environment based on web technology.

  • Node uses JavaScript with CommonJS modules and npm/yarn as its package manager.

  • Deno uses TypeScript or JavaScript with modern JavaScript import statements. It does not need a package manager.

To import a module in Deno, you reference it by URL:

import { serve } from "https://deno.land/std/http/server.ts"; 
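
For context, here is a minimal sketch of how that import is typically used. The port and response body are my own placeholders, and this reflects the pre-1.0 std server API imported above:

const server = serve({ port: 8000 });
console.log("Listening on http://localhost:8000/");

// Handle each incoming request as it arrives.
for await (const req of server) {
  req.respond({ body: "Hello from Deno\n" });
}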

You can find many of the modules you may need in the Deno standard library or in the Deno third-party modules list, but they don't have everything.

Sometimes you need to use a module which the maintainers have only made available through the npm ecosystem. Here are some methods, from most convenient to least:

1. If the module already uses ES module import/export syntax

The libraries you use from Deno don't have to come from the recommended Deno packages; they can come from any URL, provided they use the modern import syntax. unpkg is a great way to access these files directly from inside an npm package.

import throttle from "https://unpkg.com/[email protected]/throttle.js";
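
As a quick illustration, here is my own example (not from the post). Plain lodash on npm ships CommonJS, so this sketch assumes the separate lodash-es build and lets unpkg resolve the version:

import throttle from "https://unpkg.com/lodash-es/throttle.js";

// However often the wrapped function is called, it fires at most once per second.
const onTick = throttle(() => console.log("tick"), 1000);
setInterval(onTick, 100);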

2. If the module itself doesn’t use imports but the source code does

If the module distributed through npm is compiled or in the wrong format, you may still have some luck if you take a look at the source code. Many popular libraries have moved away from using CommonJS in their source code to the standards-compliant ES module import syntax.

Some packages have separate src/ and dist/ directories, where the ES-module-style code lives in src/ but isn't included in the package published to npm. In that case you can import from the source directly.

import throttle from "https://raw.githubusercontent.com/lodash/lodash/master/throttle.js";

I got this URL by clicking the "Raw" button on GitHub to get the raw JS file. It's probably neater to use a GitHub CDN or to see if the file is available through GitHub Pages, but this works.
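
For example, jsDelivr can serve files straight out of a GitHub repository. The URL below is my own construction pointing at the same lodash file; pinning a release tag instead of master would be safer:

import throttle from "https://cdn.jsdelivr.net/gh/lodash/lodash@master/throttle.js";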

NB: Some libraries use ES modules with webpack, or with a module loader which lets them import from node modules like this:

Bad:

import { someFunction } from "modulename";

import { someOtherFunction } from "modulename/file.js"; 

The standard for imports is that they need to start with ./ or be a full URL to work.

Good:

import { someOtherFunction } from "./folder/file.js"; 

In that situation try the next method:

3. Importing CommonJS modules

Fortunately, there is a service called JSPM which resolves third-party modules and compiles CommonJS modules to work as ES module imports. This tool is meant for using node modules in the browser without a build step, but we can use it here too.

The JSPM logo

In my most recent project I wanted to do push notifications, which involves generating credentials for VAPID. There is a Deno crypto library which can do encryption, but doing the full procedure is difficult, and I'd rather use the popular web-push library. I can import it using the JSPM CDN with a URL like the one below:

import webPush from "https://dev.jspm.io/web-push"; 

I can now use it like any other module in Deno.
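
As a minimal sketch continuing from the import above (generateVAPIDKeys is part of web-push's documented API, though I can't promise every code path of the library behaves under Deno):

// Generate a VAPID key pair for authenticating push notifications.
const vapidKeys = webPush.generateVAPIDKeys();
console.log(vapidKeys.publicKey, vapidKeys.privateKey);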

This almost worked 100%. Some of the bits which relied on specific node behaviors, such as making network requests, failed; in those cases I had to work around it by using the standardized fetch API that Deno provides.
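
The post doesn't show the workaround itself, but a sketch of the general shape would replace the library's Node-style network call with the standard fetch API. The endpoint, headers, and payload below are hypothetical:

// Hypothetical stand-in for the library's Node-based HTTP request.
const payload = new Uint8Array([/* encrypted message bytes */]);
const response = await fetch("https://push.example.com/send", {
  method: "POST",
  headers: { "Content-Type": "application/octet-stream" },
  body: payload,
});
console.log(response.status);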

Getting TypeScript types working

One nice feature of TypeScript, which Deno uses, is that it provides really good autocomplete for modules. The Deno extension for my editor can even autocomplete third-party modules if it knows the type definitions.

This isn’t essential to getting the code to work but can provide huge benefits for helping you maintain your code.

When I was importing another module, fast-xml-parser, I noticed while looking through the source code that it had a type definitions file: a file which ends in .d.ts. These files describe the module's interfaces and work even for plain JavaScript .js files. You can sometimes also find type definition files in the @types/somemodule repos on DefinitelyTyped.

DefinitelyTyped/DefinitelyTyped: The repository for high quality TypeScript type definitions (github.com)

Using this file, TypeScript can autocomplete on things imported from JavaScript files, even for files imported using JSPM:

// Import the fast-xml-parser library
import fastXMLParser from "https://dev.jspm.io/fast-xml-parser";

// Import the type definition file from the source code of fast-xml-parser
import * as FastXMLParser from "https://raw.githubusercontent.com/NaturalIntelligence/fast-xml-parser/master/src/parser.d.ts";

// Use the parser with the types
const parser = fastXMLParser as typeof FastXMLParser;

I import the type definitions from the definition file as FastXMLParser (note the uppercase F). This doesn't contain any working code, but it is an object which has the same type as the code we want to import.

I import the code from JSPM as fastXMLParser (lowercase f), which is the working code but has no types.

Next I combine them to make parser, which is fastXMLParser with the type of FastXMLParser.
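
With the types attached, the editor can now autocomplete calls like parse(). The XML string below is my own example; fast-xml-parser's parse() returns a plain JavaScript object:

const data = parser.parse("<note><to>Deno</to></note>");
console.log(data.note.to); // "Deno"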

Thank you for reading; I hope you give Deno a go. The ability to use any module made for the web, and even some made for Node/npm, gives this new server-side ecosystem a good foundation to build on. 🦕

View the full blog at its source

  • Similar Topics

    • By Samsung Newsroom
      At GDC 2019, Arm and Samsung shared the stage in the “All-in-One Guide to Vulkan on Mobile” talk to present what they had learned from helping numerous developers and studios optimize their Vulkan mobile games. In tandem, Arm released Vulkan Best Practices for Mobile Developers to address some of the most common challenges faced when coding Vulkan applications on mobile. It includes an expansive list of runnable samples with full source code available online.
      This blog series delves into each sample in detail, investigates individual Vulkan features, and demonstrates best practices for using them.
      Overview
      Setting up a Vulkan swapchain involves picking between options that don’t have a straightforward connection to performance. The default options might not be the most efficient ones, and what works best on a desktop may be different from what works on mobile.
      Looking at the VkSwapchainCreateInfoKHR struct, we identified three options that need a more detailed analysis:
        • presentMode: what does each present mode imply in terms of performance?
        • minImageCount: which is the best number of images?
        • preTransform: what does it mean, and what do we need to do about it?
      This blog post covers the first two points, as they are both tied to the concept of buffering swapchain images. Surface transform is quite a complex topic that we’ll cover in a future post on the Arm community.
      Choosing a present mode
      Vulkan has several present modes, but mobile GPUs only support a subset of them. In general, presenting an image directly to the screen (immediate mode) is not supported.
      The application will render an image, then pass it to the presentation engine via vkQueuePresentKHR. The presentation engine will display the image for the next VSync cycle, and then it will make it available to the application again.
      The only present modes which support VSync are:
        • FIFO: VK_PRESENT_MODE_FIFO_KHR
        • MAILBOX: VK_PRESENT_MODE_MAILBOX_KHR
      We will now look at each of these in more detail to understand which one is better for mobile.

      Figure 1 shows an outline of how the FIFO present mode works. The presentation engine has a queue (or “FIFO”) of images, in this case three of them. At each VSync signal, the image at the front of the queue is displayed on screen and then released. The application acquires one of the available images, draws to it, and hands it over to the presentation engine, which pushes it to the back of the queue. You may recognize this behavior from other graphics APIs as double or triple buffering – more on that later!
      An interesting property of the FIFO present mode is that if the GPU can process images really fast, the queue can become full at some point. When this happens, the CPU and the GPU will idle until an image finishes its time on screen and is available again. The framerate will be capped at a stable 60 fps, corresponding to VSync.
      This idling behavior works well on mobile because it means that no unnecessary work is performed. The extra CPU and GPU budget will be detected by the DVFS (Dynamic Voltage and Frequency Scaling) system, which reduces their frequencies to save power at no performance cost. This limits overheating and saves battery life – even a small detail such as the present mode can have a significant impact on your users’ experience!
      Let us take a look at MAILBOX now. The main difference, as you can see from Figure 2 below, is that there is no queue anymore. The presentation engine will now hold a single image that will be presented at each VSync signal.

      The app can acquire a new image straight away, render to it, and present it. If an image is queued for presentation, it will be discarded. Mobile demands efficiency; hence, the word “discarded” should be a big red flag when developing on mobile – the aim should always be to avoid unnecessary work.
      Since an image is already queued for presentation, the framerate will not improve. What is the advantage of MAILBOX then? Being able to keep submitting frames lets you ensure you have the latest user input, so input latency can be lower than with FIFO.
      The price you pay for MAILBOX can be very steep. If you don’t throttle your CPU and GPU at all, one of them may be fully utilized, resulting in higher power consumption. Unless you need low-input latency, our recommendation is to use FIFO.

      Choosing the number of images
      It is now clear that FIFO is the most efficient present mode for mobile, but what about minImageCount? In the context of FIFO, minImageCount differentiates between double and triple buffering, which can have an impact on performance.
      The number of images you ask for needs to be bound within the minimum and maximum images supported by the surface (you can query these values via surface capabilities). You will typically ask for 2 or 3 images, but the presentation engine can decide to allocate more.
      Let us start with double buffering. Figure 4 outlines the expected double-buffering behavior.

      Double buffering works well if frames can be processed within 16.6ms, which is the interval between VSync signals at a rate of 60 fps. The rendered image is presented to the swapchain, and the previously presented one is made available to the application again.
      What happens if the GPU cannot process frames within 16.6ms?

      Double buffering breaks! As you can see from Figure 5, if no images are ready when the VSync signal arrives, the only option for the presentation engine is to keep the current image on screen. The app has to wait for another whole VSync cycle before it acquires a new image, which effectively limits the framerate to 30 fps. A much higher rate could be achieved if the GPU could keep processing frames. This may be ok for you if you are happy to limit framerate to 30 fps, but if you’re aiming for 60 fps, you should consider triple buffering.
      Even if your app can achieve 60 fps most of the time, with double buffering the tiniest slowdown below 60 fps results in an immediate drop to 30 fps.
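
      To make the arithmetic explicit, here is my own back-of-the-envelope check (not from the original post): the VSync interval at 60 Hz, and the effective framerate when a frame misses one deadline and therefore stays on screen for two intervals, are

      t_{\mathrm{VSync}} = \frac{1}{60\,\mathrm{Hz}} \approx 16.6\,\mathrm{ms}, \qquad f_{\mathrm{missed}} = \frac{1}{2\,t_{\mathrm{VSync}}} \approx 30\,\mathrm{fps}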

      Figure 6 shows triple buffering in action. Even if the GPU has not finished rendering when VSync arrives, a previous frame is queued for presentation. This means that the presentation engine can release the currently displayed image and the GPU can acquire it as soon as it is ready.
      In the example shown, triple buffering results in ~50 fps versus 30 fps with double buffering.
      The sample
      Our Vulkan Best Practice for Mobile Developers project on GitHub has a sample on swapchain images that specifically compares double and triple buffering. You can check out the tutorial for the Swapchain Images sample.

       

      As you can see from Figures 7 and 8, triple buffering lets the app achieve a stable 60 fps (16.6 ms frame time), providing a 2x higher frame rate; when switching to double buffering, the framerate drops.
      We encourage you to check out the project on the Vulkan Mobile Best Practice GitHub page and try this or other samples for yourself! The sample code gives developers on-screen control to demonstrate multiple ways of using the feature. It also shows the performance impact of the different approaches through real-time hardware counters on the display. You are also warmly invited to contribute to the project by providing feedback and fixes and creating additional samples.
      Please also visit the Arm Community for more in-depth blogs on the other Vulkan samples.
      View the full blog at its source




