STF News

Vulkan Mobile Best Practice: How To Configure Your Vulkan Swapchain

At GDC 2019, Arm and Samsung shared the stage in the “All-in-One Guide to Vulkan on Mobile” talk, presenting what they had learned from helping numerous developers and studios optimize their Vulkan mobile games. In tandem, Arm released Vulkan Best Practices for Mobile Developers to address some of the most common challenges faced when coding Vulkan applications on mobile. It includes an expansive list of runnable samples with full source code available online.

This blog series delves into each sample in detail, investigates individual Vulkan features, and demonstrates best practices for using them.

Overview

Setting up a Vulkan swapchain involves picking between options that don’t have a straightforward connection to performance. The default options might not be the most efficient ones, and what works best on a desktop may be different from what works on mobile.

Looking at the VkSwapchainCreateInfoKHR struct, we identified three options that need a more detailed analysis:

  • presentMode: what does each present mode imply in terms of performance?
  • minImageCount: which is the best number of images?
  • preTransform: what does it mean, and what do we need to do about it?

This blog post covers the first two points, as they are both tied to the concept of buffering swapchain images. Surface transform is quite a complex topic that we’ll cover in a future post on the Arm community.
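For reference, all three of these options are set on the same struct when the swapchain is created. Below is a minimal sketch of filling it in, assuming `surface`, `surfaceCapabilities`, and `surfaceFormat` have already been queried via the usual `vkGetPhysicalDeviceSurface*` calls; the remaining fields shown are typical values, not recommendations:

```cpp
#include <vulkan/vulkan.h>

// Sketch only: assumes surface, surfaceCapabilities and surfaceFormat
// were obtained earlier via the vkGetPhysicalDeviceSurface* queries.
VkSwapchainCreateInfoKHR swapchainInfo{};
swapchainInfo.sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
swapchainInfo.surface          = surface;
swapchainInfo.presentMode      = VK_PRESENT_MODE_FIFO_KHR;             // discussed below
swapchainInfo.minImageCount    = 3;                                    // double vs. triple buffering
swapchainInfo.preTransform     = surfaceCapabilities.currentTransform; // topic of a future post
swapchainInfo.imageFormat      = surfaceFormat.format;
swapchainInfo.imageColorSpace  = surfaceFormat.colorSpace;
swapchainInfo.imageExtent      = surfaceCapabilities.currentExtent;
swapchainInfo.imageArrayLayers = 1;
swapchainInfo.imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
swapchainInfo.compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
swapchainInfo.clipped          = VK_TRUE;
```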

Choosing a present mode

Vulkan has several present modes, but mobile GPUs only support a subset of them. In general, presenting an image directly to the screen (immediate mode) is not supported.

The application will render an image, then pass it to the presentation engine via vkQueuePresentKHR. The presentation engine will display the image for the next VSync cycle, and then it will make it available to the application again.

The only present modes which support VSync are:

  • FIFO: VK_PRESENT_MODE_FIFO_KHR
  • MAILBOX: VK_PRESENT_MODE_MAILBOX_KHR

We will now examine each of these in more detail to understand which one is better suited for mobile.

[Figure 1: The FIFO present mode]

Figure 1 shows an outline of how the FIFO present mode works. The presentation engine has a queue (or “FIFO”) of images – in this case, three of them. At each VSync signal, the image at the front of the queue is displayed on screen and then released. The application acquires one of the available images, draws to it, and hands it over to the presentation engine, which pushes it to the back of the queue. You may know this behavior from other graphics APIs as double or triple buffering – more on that later!

An interesting property of the FIFO present mode is that if the GPU can process images very quickly, the queue can become full at some point. When this happens, the CPU and the GPU will idle until an image finishes its time on screen and becomes available again. The framerate will be capped at a stable 60 fps, corresponding to VSync.

This idling behavior works well on mobile because it means that no unnecessary work is performed. The extra CPU and GPU budget will be detected by the DVFS (Dynamic Voltage and Frequency Scaling) system, which reduces their frequencies to save power at no performance cost. This limits overheating and saves battery life – even a small detail such as the present mode can have a significant impact on your users’ experience!

Let us take a look at MAILBOX now. The main difference, as you can see from Figure 2 below, is that there is no queue anymore. The presentation engine will now hold a single image that will be presented at each VSync signal.

[Figure 2: The MAILBOX present mode]

The app can acquire a new image straight away, render to it, and present it. If another image was already queued for presentation, it will be discarded and replaced. Mobile demands efficiency; hence, the word “discarded” should be a big red flag when developing on mobile – the aim should always be to avoid unnecessary work.

Since only one image can be displayed per VSync cycle, the framerate will not improve. What is the advantage of MAILBOX, then? Being able to keep submitting frames ensures the latest user input is reflected, so input latency can be lower than with FIFO.

The price you pay for MAILBOX can be very steep: if you don’t throttle your CPU and GPU at all, one of them may end up fully utilized, resulting in higher power consumption. Unless you need low input latency, our recommendation is to use FIFO.
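That recommendation can be captured in a small selection helper. The sketch below is self-contained for illustration: it mirrors the two relevant `VkPresentModeKHR` values as a local enum instead of including the Vulkan headers, and the `supported` list stands in for the result of `vkGetPhysicalDeviceSurfacePresentModesKHR`:

```cpp
#include <algorithm>
#include <vector>

// Local stand-ins for the VkPresentModeKHR constants (same numeric
// values as in vulkan/vulkan.h).
enum PresentMode { MAILBOX = 1, FIFO = 2 };

// Prefer FIFO for power efficiency; fall back to MAILBOX only when the
// caller explicitly wants lower input latency and the surface supports it.
PresentMode choosePresentMode(const std::vector<PresentMode>& supported,
                              bool wantLowLatency)
{
    bool hasMailbox = std::find(supported.begin(), supported.end(),
                                MAILBOX) != supported.end();
    if (wantLowLatency && hasMailbox)
        return MAILBOX;
    return FIFO;  // FIFO support is mandatory per the Vulkan specification
}
```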

[Figure 3]

Choosing the number of images

It is now clear that FIFO is the most efficient present mode for mobile, but what about minImageCount? In the context of FIFO, minImageCount differentiates between double and triple buffering, which can have an impact on performance.

The number of images you ask for needs to fall within the minimum and maximum number of images supported by the surface (you can query these values via the surface capabilities). You will typically ask for 2 or 3 images, but the presentation engine can decide to allocate more.
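This clamping is simple enough to sketch as a helper. In the real API the bounds come from `VkSurfaceCapabilitiesKHR::minImageCount` and `maxImageCount`, where a `maxImageCount` of 0 means the surface imposes no upper limit:

```cpp
#include <cstdint>

// Clamp a desired swapchain image count to the surface's supported range.
// maxImageCount == 0 means there is no upper limit.
uint32_t chooseImageCount(uint32_t desired,
                          uint32_t minImageCount,
                          uint32_t maxImageCount)
{
    uint32_t count = (desired < minImageCount) ? minImageCount : desired;
    if (maxImageCount != 0 && count > maxImageCount)
        count = maxImageCount;
    return count;
}
```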

Let us start with double buffering. Figure 4 outlines the expected double-buffering behavior.

[Figure 4: Double buffering]

Double buffering works well if frames can be processed within 16.6ms, which is the interval between VSync signals at a rate of 60 fps. The rendered image is presented to the swapchain, and the previously presented one is made available to the application again.

What happens if the GPU cannot process frames within 16.6ms?

[Figure 5: Double buffering when the GPU misses VSync]

Double buffering breaks! As you can see from Figure 5, if no image is ready when the VSync signal arrives, the only option for the presentation engine is to keep the current image on screen. The app then has to wait for another whole VSync cycle before it can acquire a new image, which effectively limits the framerate to 30 fps, even though a much higher rate could be achieved if the GPU could keep processing frames. This may be acceptable if you are happy to cap the framerate at 30 fps, but if you are aiming for 60 fps, you should consider triple buffering.

Even if your app can achieve 60 fps most of the time, with double buffering the tiniest slowdown below 60 fps results in an immediate drop to 30 fps.

[Figure 6: Triple buffering]

Figure 6 shows triple buffering in action. Even if the GPU has not finished rendering when VSync arrives, a previous frame is queued for presentation. This means that the presentation engine can release the currently displayed image and the GPU can acquire it as soon as it is ready.

In the example shown, triple buffering results in ~50 fps versus 30 fps with double buffering.
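Those numbers can be reproduced with a back-of-the-envelope model (a deliberate simplification that ignores CPU/GPU pipelining): under FIFO, double buffering rounds every frame up to a whole number of VSync periods, while triple buffering is limited only by the slower of GPU frame time and display refresh.

```cpp
#include <cmath>

// Effective frame interval (ms) with double buffering under FIFO: the next
// frame cannot start until VSync releases an image, so the interval rounds
// up to a whole number of VSync periods.
double doubleBufferedInterval(double gpuMs, double vsyncMs)
{
    return std::ceil(gpuMs / vsyncMs) * vsyncMs;
}

// With triple buffering the GPU can keep working, so throughput is bound by
// whichever is slower: GPU frame time or display refresh.
double tripleBufferedInterval(double gpuMs, double vsyncMs)
{
    return (gpuMs > vsyncMs) ? gpuMs : vsyncMs;
}
```

With a hypothetical 20 ms GPU frame and a 16.6 ms VSync period, the model gives 33.2 ms per frame (30 fps) for double buffering and 20 ms (50 fps) for triple buffering, matching the figures above.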

The sample

Our Vulkan Best Practices for Mobile Developers project on GitHub has a sample on swapchain images that specifically compares double and triple buffering. You can check out the tutorial for the Swapchain Images sample.

[Figure 7: The Swapchain Images sample with triple buffering]

 

[Figure 8: The Swapchain Images sample with double buffering]

As you can see from Figures 7 and 8, triple buffering lets the app achieve a stable 60 fps (16.6 ms frame time), a 2x higher frame rate. When switching to double buffering, the framerate drops.

We encourage you to check out the project on the Vulkan Best Practices for Mobile Developers GitHub page and try this or the other samples for yourself! The sample code gives developers on-screen controls to demonstrate multiple ways of using the feature, and shows the performance impact of the different approaches through real-time hardware counters on the display. You are also warmly invited to contribute to the project by providing feedback and fixes and by creating additional samples.

Please also visit the Arm Community for more in-depth blogs on the other Vulkan samples.
