Nvidia

2022-09-20

Image courtesy of "NVIDIA Blog"

NVIDIA Delivers Quantum Leap in Performance, Introduces New Era ... (NVIDIA Blog)

NVIDIA today unveiled the GeForce RTX® 40 Series of GPUs, designed to deliver revolutionary performance for gamers and creators, led by its new flagship, ...

The RTX 4080 16GB has 9,728 CUDA cores and 16GB of high-speed Micron GDDR6X memory; with DLSS 3 it is 2x as fast in today's games as the GeForce RTX 3080 Ti, and more powerful than the GeForce RTX 3090 Ti at lower power. The RTX 4080 12GB has 7,680 CUDA cores and 12GB of Micron GDDR6X memory, and with DLSS 3 is faster than the RTX 3090 Ti, the previous-generation flagship GPU. In fully ray-traced games, the RTX 4090 with DLSS 3 is up to 4x faster than last generation's RTX 3090 Ti with DLSS 2. Nvidia bills the RTX 4090 as the world's fastest gaming GPU, with astonishing power, acoustics and temperature characteristics.

For decades, rendering ray-traced scenes with physically correct lighting in real time has been considered the holy grail of graphics, and two Ada features push toward it. The Micro-Mesh Engine provides the benefits of increased geometric complexity without the traditional performance and storage costs of complex geometries, while Shader Execution Reordering (SER) improves execution efficiency by rescheduling shading workloads on the fly to better utilize the GPU's resources. DLSS 3 can even overcome CPU performance limitations in games by allowing the GPU to generate entire frames independently.

[NVIDIA Omniverse](https://www.nvidia.com/en-us/omniverse/)™, included in the NVIDIA Studio suite of software, will soon add [NVIDIA RTX Remix](https://www.nvidia.com/en-us/geforce/news/rtx-remix-announcement/), a modding platform to create stunning RTX remasters of classic games. Portal with RTX will be released as free, official downloadable content for the classic game in November, just in time for Portal's 15th anniversary. "Ada provides a quantum leap for gamers and paves the way for creators of fully simulated worlds," said NVIDIA CEO Jensen Huang.
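DLSS 3's frame generation pairs a hardware optical flow accelerator with a neural network, and Nvidia has not published its internals. As a rough conceptual illustration only, not Nvidia's pipeline, here is a minimal CPU sketch of the underlying idea, using dense optical flow between two rendered frames to synthesize an intermediate frame (the OpenCV calls are real; the function name and flow parameters are illustrative assumptions):

```python
import cv2
import numpy as np

def interpolate_frame(prev_bgr: np.ndarray, next_bgr: np.ndarray) -> np.ndarray:
    """Synthesize an approximate frame halfway between two rendered frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel motion vectors from the previous frame to the next.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the previous frame halfway along the motion vectors
    # (a crude backward warp; the real pipeline is learned and far smarter).
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)
```

This is also why frame generation sidesteps CPU limits: the synthesized frame is produced entirely from the GPU's previous outputs, with no per-frame game or driver work.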

Image courtesy of "TechCrunch"

Nvidia debuts new products for robotics developers, including ... (TechCrunch)

At its fall 2022 GTC developer conference, Nvidia announced new products geared toward robotics developers, including a cloud-based Isaac Sim and the Jetson ...


Image courtesy of "Bloomberg"

Nvidia Puts AI at Center of Latest GeForce Graphics Card Upgrade (Bloomberg)

Nvidia Corp., the most valuable semiconductor maker in the US, unveiled a new type of graphics chip that uses enhanced artificial intelligence to create ...

Codenamed Ada Lovelace, the new architecture underpins the company's GeForce RTX 40 series of graphics cards, unveiled by co-founder and Chief Executive Officer Jensen Huang at an online event Tuesday. The top-of-the-line RTX 4090 will cost $1,599 and go on sale Oct. 12; other versions coming in November will retail for $899 and $1,199.

Image courtesy of "Reuters"

Nvidia unveils new gaming chip with AI features, taps TSMC for ... (Reuters)

Nvidia Corp on Tuesday announced new flagship chips for video gamers that use artificial intelligence (AI) to enhance graphics, saying it has tapped Taiwan ...

Nvidia has gained attention in recent years with its booming data center business, which sells chips used in artificial intelligence work such as natural language processing; the launch comes on the heels of a U.S. ban on selling Nvidia's top data center AI chips to China. Nvidia designs its chips but has them manufactured by partners. The Lovelace chips extend the company's DLSS technique to generate entire frames of a game using AI. The flagship GeForce RTX 4090 model will sell for $1,599 and go on sale on Oct. 12.

Image courtesy of "TechCrunch"

Nvidia unveils Drive Thor, one chip to rule all software-defined ... (TechCrunch)

Nvidia revealed a next-generation automotive-grade chip that will unify a wide-range of in-car technology and go into production in 2025.


Image courtesy of "TrustedReviews"

Nvidia RTX 4080 vs Nvidia RTX 3080: Is newer better? (TrustedReviews)

Nvidia just announced its new line of GPUs with the Nvidia RTX 4000 Series, code-named Lovelace. Here's how it compares to its predecessor.

This is the company's third generation of RTX graphics cards, with plenty more updates and improvements for gamers and creatives. DLSS is developed by Nvidia and uses AI to boost a game's framerate, allowing you to play games at a higher frame rate without overloading your GPU; with DLSS 3, gamers should be able to play supported games with even higher frame rates and more impressive graphics. The company also revealed that full ray tracing will be coming to Cyberpunk 2077. While it's expected that next-generation hardware can be more costly, the latest RTX 4080 is a lot more expensive when compared to its predecessor. Looking back at the RTX 3080, which costs only $699 (12GB) / $649 (10GB), it is definitely the better option if you're looking to upgrade without breaking the bank.

Image courtesy of "TechCrunch"

Nvidia debuts new high-end RTX 4090 GPU after previous ... (TechCrunch)

The upgrade comes at an interesting time for PC users, who have been starved out of the GPU market by crypto miners for years, and now have their choice of ...


Image courtesy of "Forbes"

NVIDIA Launches Lovelace GPU, Cloud Services, Ships H100 ... (Forbes)

It's impossible to convey the excitement of a Jensen Huang keynote address at GTC, but here's what caught my eye.

NVIDIA made a slew of technology and customer announcements at the Fall GTC this year. While at Cambrian-AI we tend to focus on the data center and AI at the edge, we would be remiss if we did not mention the star of the show, the Lovelace GPU. As background, lest you think that LLMs are a solution looking for a problem, the generative model craze alone is staggering. Seizing the opportunity, NVIDIA is positioning the Hopper GPU as a breakthrough in reducing the exorbitant costs of training these massive models. CEO Jensen Huang indicated that NVIDIA DGX servers and HGX modules using SXM and NVLink will ship in Q1 2023, and expects wide-scale public cloud support in the same timeframe. He also pointed out that the Grace-Hopper superchip will deliver 7X the fast-memory capacity (4.6TB) and 8,000 TFLOPS versus today's CPU-GPU configurations, critical for the recommender models used by eCommerce and media super-websites. NVIDIA also announced several cloud services that will reduce barriers to adoption of the company's extensive software portfolio. Next up is a new cloud service that will enable far more professionals to interact and collaborate on the development and testing of digital twins. As it matures, cloud-based 3D graphics generation will greatly expand the utility of Omniverse, the industry's only metaverse platform targeting the professional and creative communities. There are another dozen technologies that merit attention, such as the addition of NVIDIA Clara to the Broad Institute's Terra cloud platform. Disclosures: This article expresses the opinions of the author, and is not to be taken as advice to purchase from nor invest in the companies mentioned. We have no investment positions in any of the companies mentioned in this article and do not plan to initiate any in the near future.

Image courtesy of "The Register"

Nvidia sets out timeline for H100 GPUs – now for HGX, next year for ... (The Register)

GTC Nvidia's long-awaited Hopper H100 accelerators will begin shipping later next month in OEM-built HGX systems, the silicon giant said at its GPU ...

Nvidia said Drive Thor is designed to unify the litany of computer systems that power modern automobiles into a single centralized platform; the tech also allows the chip to run multiple operating systems simultaneously to suit the various vehicle applications. However, it is unknown when we might get to see the tech in action, and as it stands, all three of Nvidia's launch partners, Zeekr, Xpeng, and QCraft, are based in China. The company's second-gen visualization and digital-twinning systems come equipped with eight L40 GPUs; the compute system is accompanied by three ConnectX-7 NICs, each capable of 400Gbps of throughput, and 16TB of NVMe storage. The second-gen systems will be available from Lenovo, Supermicro, and Inspur from 2023. At its core, the IGX system is essentially an expanded version of Nvidia's previously announced Jetson AGX Orin module; IGX Orin developer kits are slated to ship early next year, with production systems available from ADLink, Advantech, Dedicated Computing, Kontron, MBX, and Onyx, to name a handful. On the topic of Orin, Nvidia also unveiled its Jetson Orin Nano compute modules, available in January starting at $199. However, those waiting to get their hands on Nvidia's DGX H100 systems will have to wait until sometime in Q1 next year.

Image courtesy of "TrustedReviews"

Lower power consumption is a massive win for the Nvidia RTX 4080 (TrustedReviews)

OPINION: A wise man once said “with great power comes more expensive energy bills”, but that's the consequence you have to accept when upgrading your ...

The [Nvidia RTX 2080](https://www.trustedreviews.com/reviews/nvidia-rtx-2080) has a 225W graphics card power, with Nvidia recommending a 650W minimum system power. Meanwhile, the RTX 4080 (12GB) flaunts a low graphics card power of 285W, taking the recommended system power down to 700W. So Nvidia still has a long way to go if it wants to win over the approval of energy bill expert Martin Lewis or eco-warrior Greta Thunberg. But the fact Nvidia has been able to do all this while also cutting down on power consumption is unbelievable. I can understand why the company downplays it, as power efficiency simply isn't as interesting to the average gamer as breakthrough advancements in artificial intelligence and light-rendering techniques.
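To put those figures in context, here is a back-of-envelope sketch of how a card's board power maps to a recommended PSU rating. The 250W rest-of-system estimate and 20% headroom are illustrative assumptions, not Nvidia's methodology:

```python
def recommended_psu_watts(gpu_watts: float,
                          rest_of_system_watts: float = 250.0,
                          headroom: float = 0.20) -> int:
    """Estimate a PSU rating: GPU + rest of system, plus safety headroom."""
    total = (gpu_watts + rest_of_system_watts) * (1.0 + headroom)
    return int(round(total / 50.0) * 50)  # round to a typical PSU size

print(recommended_psu_watts(285))  # RTX 4080 12GB -> 650W (Nvidia says 700W)
print(recommended_psu_watts(225))  # RTX 2080      -> 550W (Nvidia says 650W)
```

Nvidia's published figures run a little higher than this naive estimate, which suggests it also budgets for heavier CPUs and transient power spikes.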

Image courtesy of "Reuters"

Chipmaker Nvidia launches new system for autonomous driving (Reuters)

Chip giant Nvidia Corp on Tuesday unveiled its new computing platform called DRIVE Thor that would centralize autonomous and assisted driving as well as ...

"There's a lot of companies doing great work, doing things that will benefit mankind and we want to support them," Shapiro said. ban on exports of two top Nvidia computing chips for data centers to China. [read more](/business/autos-transportation/upset-by-high-prices-gms-cruise-develops-its-own-chips-self-driving-cars-2022-09-14/) Register now for FREE unlimited access to Reuters.com [(GM.N)](https://www.reuters.com/companies/GM.N) autonomous driving unit Cruise last week said it had developed its own chips to be deployed by 2025.

Image courtesy of "Forbes"

Nvidia Cancels Atlan Chip For AVs, Launches Thor With Double ... (Forbes)

At the fall 2022 GTC, Nvidia CEO Jensen Huang announced the cancelation of the Atlan automated driving chip, announced in 2021, and the introduction of Thor with ...

When it was announced, Atlan promised the highest performance of any automotive SoC to date, with up to 1,000 trillion operations per second (TOPS) of integer computing capability. For comparison, the Parker SoC that powered version 2 of Tesla AutoPilot (in combination with a Pascal GPU) from 2016 delivered about 1 TOPS, and was followed in 2020 by the Xavier chip with 30 TOPS. At this week's fall 2022 GTC, Huang announced that Atlan has been canceled and replaced with a new design dubbed Thor, which will offer twice the performance and data throughput while still arriving in 2025.
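For a quick sense of those generational jumps, the quoted TOPS figures line up as follows (Thor's 2,000 TOPS follows from Nvidia's "twice the performance" claim over Atlan):

```python
# Automotive SoC integer compute, per the figures quoted above.
socs = {"Parker (2016)": 1, "Xavier (2020)": 30,
        "Atlan (canceled)": 1000, "Thor (2025)": 2000}
for name, tops in socs.items():
    print(f"{name}: {tops:>5} TOPS ({tops / socs['Parker (2016)']:,.0f}x Parker)")
```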

Image courtesy of "TrustedReviews"

Nvidia RTX 4090 vs Nvidia RTX 4080: Which Lovelace GPU is better? (TrustedReviews)

Nvidia recently revealed its latest batch of GPUs, code-named Lovelace. But how do the latest releases compare to each other?

The RTX 4080 comes in either 12GB or 16GB configurations, with the latter costing more. The first configuration comes with 12GB of memory, alongside 7,680 CUDA cores, 639 Tensor-TFLOPS and 92 RT-TFLOPS, while the high-end 16GB model offers 113 RT-TFLOPS. Meanwhile, the RTX 4090 has only one variant, featuring 191 RT-TFLOPS, but it costs a lot more than its companions, with a starting price of $1,599. Looking at the specs of each GPU, it's clear that the RTX 4090 has a lot more power. Having more CUDA cores means the hardware can process more data in parallel.
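CUDA cores are the execution units behind that parallelism: each one runs a lightweight thread, so a wider GPU finishes the same data-parallel job in fewer passes. The kernel below is a minimal sketch of the programming model using Numba's CUDA JIT (it assumes a CUDA-capable GPU and the `numba` package; the operation and sizes are arbitrary examples):

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # One GPU thread handles one array element.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
```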

Image courtesy of "Eurogamer.net"

Nvidia announces RTX 4090 and 4080 graphics cards, DLSS 3 (Eurogamer.net)

Nvidia has announced its latest generation of GeForce RTX graphics cards at its GTC AI conference - and its new DLSS 3 algorithm, which combines for mad ...

Nvidia demonstrated Cyberpunk 2077 running at ~22fps at 4K with RT enabled and DLSS disabled, then ~100fps with RT enabled and DLSS 3 engaged, a massive speedup even if this is a cherry-picked demo. There are similar advancements in the dedicated ray tracing silicon, with a doubling of ray-triangle intersection throughput, a new opacity micromap engine that doubles the speed of 'ray tracing of alpha test geometry' and a micromesh engine that 'increases geometric richness without the BVH build and storage cost'. The new process allows the generation to be significantly more power-efficient too, although we expect the flagship cards to be as power-hungry as rumoured; you're just getting a ton of extra performance to offset that in efficiency terms. As well as upgrading existing games with RTX and DLSS 3, Nvidia also announced a 'new' title: Portal RTX. It was an impressive presentation from Nvidia, and we're looking forward to testing both the performance and features of the new generation, to seeing whether the performance claims are substantiated, and to finding out whether these new cards will actually be available at their announced prices.

Image courtesy of "TechCrunch"

Nvidia launches new services for training large language models (TechCrunch)

Today at the company's fall 2022 GTC conference, Nvidia announced the NeMo LLM Service and BioNeMo LLM Service, which ostensibly make it easier to adapt LLMs ...


Image courtesy of "HPCwire"

NeMo LLM Service: Nvidia's First Cloud Service Makes AI Less Vague (HPCwire)

Nvidia is trying to uncomplicate AI with a cloud service that makes AI and its many forms of computing less vague and more conversational.

The NeMo LLM service is the latest addition to a stable of software machines deployed in Nvidia's AI factory; it will help models answer questions in the language best suited to a specific domain. The customization process is called p-tuning, which takes advantage of the new transformer cores in the Hopper GPU. At the end of the learning cycle the main pre-trained model doesn't change; instead, based on the input, a prompt token is issued, which provides the context. "And that token gives the model the context it needs to answer that question more accurately," Kharya said. The output is a cloud-based API for users to interact with the service or use in applications. Nvidia is also kicking off the NeMo LLM cloud service with BioNeMo, which provides researchers access to pre-trained chemistry and biology language models. "Transformers can rein in the more distinct relationships and that's important for a whole class of problems. In the case of genomics and protein sequencing, the known structures and the behaviors and patterns is the data set that we have," Kharya said. The OpenFold Consortium, which includes academics, startups and companies in the biotechnology and pharmaceutical sectors, developed the open-source protein language model. The model was originally developed by Meta (Facebook's parent company), was retrained by Nvidia, and is now being offered as a service. Nvidia will serve the model, but will also continue to iterate and co-develop the models with the consortium.
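HPCwire doesn't give implementation details, but the mechanism it describes, a frozen pre-trained model steered by small learned prompt embeddings, looks roughly like this PyTorch sketch. The `frozen_lm` module (assumed to accept input embeddings directly) and all dimensions are hypothetical placeholders:

```python
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    """Prepends trainable prompt embeddings to the inputs of a frozen LM,
    the general mechanism behind p-tuning / prompt tuning."""
    def __init__(self, frozen_lm: nn.Module, embed_dim: int, n_prompt_tokens: int = 20):
        super().__init__()
        self.lm = frozen_lm
        for p in self.lm.parameters():
            p.requires_grad = False  # the pre-trained model never changes
        # The only trainable weights: one embedding per prompt token.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # The learned prompt supplies domain context for every query.
        return self.lm(torch.cat([prompt, token_embeddings], dim=1))
```

Training updates only the prompt bank, which matches the article's point that the main pre-trained model stays fixed while a learned token supplies the context.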

Image courtesy of "Pocket-lint.com"

Nvidia unleashes GeForce RTX 4080 and 4090 graphics cards (Pocket-lint.com)

The next generation of PC graphics hardware is finally here, and it's looking mighty impressive.

First up was the flagship card, the RTX 4090, which comes with a whopping 24GB of GDDR6X memory and promises to be two to four times faster than the reigning champ, the RTX 3090 Ti. It will be available on October 12th at a price of $1599. The RTX 4080 brings two to four times the performance of the RTX 3080 Ti, with that four-times score also coming from Nvidia's own Racer X demo.

Nvidia, Broad Institute Team on Deep Learning, Natural Language ... (GenomeWeb)

At its annual GTC meeting, computing hardware maker Nvidia unveiled several new technologies and partnerships in bioinformatics.

"The next frontier for imaging is contributing to the innovations in minimally invasive surgery and robotic surgery," she said. In the future, researchers will be able to customize LLM models, the company said. "LLMs give us a new tool to explore the infinite world of biomolecules and chemistry." Anthony Philippakis, the Broad's chief data officer, said that LLMs help researchers and clinicians make sense of human language in medical records. "Similarly, in biology, there's another set of languages, the language of DNA, RNA, and proteins," he said. "This partnership with Nvidia will create greater access to different types of analysis and bring that to a wider group of people who wouldn't necessarily have access to those sophisticated technology services.

Image courtesy of "Kotaku"

Nvidia's New 4000-Series PC Graphics Cards Are Too Damn ... (Kotaku)

Are graphics cards like the just-revealed RTX 4090 and RTX 4080s becoming unaffordable?

Today, after many months of leaks, rumors, and speculation, Nvidia finally officially revealed its next generation of graphics cards, the RTX 4000 series. The price point of the RTX 4090 starts at $1,599. Nvidia also revealed a 16GB RTX 4080, which many observers take to be the closest to a true 3080 successor, for a whopping $1,199, an increase of $500; Nvidia says the RTX 4080 16GB is 3x the performance of the RTX 3080 Ti on next-gen content like Cyberpunk with RT Overdrive mode or Racer RTX, for that same price of $1,199. [One commenter looked back](https://old.reddit.com/r/hardware/comments/xjbobv/geforce_rtx_4090_revealed_releasing_in_october/ip7pdmc/) to 2018's GeForce RTX 20-series launch to pinpoint why today's prices felt so exorbitant. Indeed, in 2018, Nvidia attracted criticism for pricing its then-new RTX 20-series cards a full "tier" higher than the previous 10-series cards had cost; for example, the RTX 2070 cost almost as much as the prior high-end GTX 1080, despite being less of a flagship card. "With the 20 series, they bumped all of the prices a whole fucking tier, and it looks like they are doing it again. They are trying to sell you a 4070 rebranded as a 4080 for 900$ lmao." Of course, even then its MSRP is $899, which is $400 more than the RTX 3070's original MSRP of $499. Nvidia also announced [a ray-traced version of Portal](https://www.nvidia.com/en-us/geforce/news/portal-with-rtx-ray-tracing/). But viewing events from the consumer side, it really feels like the costs of enthusiast PC gaming are continuing to skyrocket, and at a time when the costs of just about everything else are, too. I hope there is some sort of relief on the horizon, because as one Redditor put it, "I love PC gaming, but I can't fucking afford to be a part of it anymore."

Image courtesy of "Data Center Knowledge"

NVIDIA's Omniverse Lets You Create Digital Twins of Your Data ... (Data Center Knowledge)

Nvidia promises to make 3D modeling and digital technology more accessible for your data center, just not at the moment.

We admit, many of the firm's announcements have a decided cool factor, leveraging the power of 3D for realistic simulations, but is there anything here that will change your life as a data center pro today or in the very near future? We're seeing a trend here in that NVIDIA wants simulation technology to be readily available and as plug-and-play as possible for enterprises. The implementation of robotics to maintain data center equipment would get a boost from NVIDIA's Omniverse, based on case studies from other verticals such as the automotive and railway industries. We in the data center industry know what that means: more demand for access to storage, network, and compute. Systems to support the chips are coming in the first half of 2023. Our friends at Siemens provided us with access to a [webinar on digital twin technology](https://new.siemens.com/global/en/markets/data-centers/events-webinars/webinar-digital-twin-applications-for-data-centers-apac-emea.html).

Image courtesy of "HPCwire"

Nvidia Introduces New Ada Lovelace Architecture, OVX Systems ... (HPCwire)

In his GTC keynote today, Nvidia CEO Jensen Huang launched another new Nvidia GPU architecture: Ada Lovelace, named for the legendary mathematician regarded ...

In his GTC keynote today, Nvidia CEO Jensen Huang launched another new Nvidia GPU architecture: Ada Lovelace, named for the legendary mathematician regarded as the first computer programmer. Ada Lovelace is not a subset of Nvidia's Hopper GPU architecture (announced just six months prior), nor is it truly a successor; instead, Ada Lovelace is to graphics workloads as Hopper is to AI and HPC workloads. The company also announced two GPUs based on the Ada Lovelace architecture, the workstation-focused RTX 6000 and the datacenter-focused L40, along with the Omniverse-focused, L40-powered, second-generation OVX system. The second-generation OVX system features an updated GPU architecture and enhanced networking technology. "With a massive 48GB frame buffer, OVX, with eight L40s, will be able to process giant Omniverse virtual world simulations." "In the case of OVX, we do optimize it for digital twins from a sizing standpoint, but I want to be clear that it can be virtualized." Nvidia said that the RTX 6000 would be available in a couple of months from channel partners, with wider availability from OEMs late this year into early next year to align with developments elsewhere in the industry. With Omniverse Cloud, users can collaborate on 3D workflows without the need for local compute power; Omniverse Cloud will also be available as Nvidia managed services via early access by application. There is also Omniverse Replicator, a 3D synthetic data generator for researchers, developers, and enterprises that integrates with Nvidia's AI cloud services. "Using this technology to generate large volumes of high-fidelity, physically accurate scenarios in a scalable, cost-efficient manner will accelerate our progress towards our goal of a future with zero accidents and less congestion." "Planning our factories of the future starts with building state-of-the-art digital twins using Nvidia Omniverse," said Jürgen Wittmann, head of innovation and virtual production at BMW Group.

Image courtesy of "The Register"

Nvidia unveils RTX 4090 – but it's the 4080 to watch out for (The Register)

Nvidia CEO Jensen Huang unveiled his GPU giant's flagship RTX 40-series graphics cards at GTC today. Powered by Nv's Ada Lovelace microarchitecture and a ...

As with previous years, Nvidia is sticking with its flagship cards out of the gate. Powered by Nv's Ada Lovelace microarchitecture and a TSMC 4nm process, the RTX 4090 and 4080 are said to offer more than twice the performance of the previous 3090 Ti and 3080 Ti flagships. Kicking things off with the RTX 4090, the unit bears more than a passing resemblance to the 3090 Ti Huang was comparing it to during his keynote presentation. Beyond that, the new card's specs suggest it will deliver substantially higher performance, presumably thanks to architectural and process node improvements. Both the 12GB and 16GB 4080s will launch in November at suggested prices of $899 and $1,199 respectively. While the cards use the same GDDR6X memory as their predecessors, Nvidia has cut the memory interface down to 256 bits on the 16GB model and 192 bits on the 12GB version. And that's not the only way Nvidia has seemingly kneecapped the 4080: the gap grows even larger in the case of the 4080 12GB variant, which features 7,680 CUDA cores, roughly 2,000 fewer than its predecessor. The 16GB and 12GB cards are rated at 320W and 285W, roughly 9-19 percent less power than the 3080's 350W TDP. While little is known about AMD's upcoming consumer cards beyond a few seconds of gameplay demoed during the Zen titan's Ryzen 7000 desktop launch last month, we do know it'll be based on a 5nm TSMC manufacturing process. With that said, we're not holding our breath.

Image courtesy of "CNBC"

Nvidia CEO tries to soothe investor angst over gaming as new ... (CNBC)

The CNBC Investing Club gives investors a behind-the-scenes look at how Jim Cramer manages an investment portfolio so you can manage your own money and ...

"The actions we're taking right now to clear the inventory in the channel, to normalize inventory in the channel, is a good action. See here for a full list of the stocks.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. "Coming into the year, the whole market was really, really vibrant and was super high, and the supply chain was super long, so we had a lot of inventory in the pipeline," Huang said. Huang estimated that, in total, these corrective actions should span about two-and-a-half quarters, meaning the impact would be felt in "a little bit of Q4." "Of course, that resulted in Q2 and Q3 being a lot lower than we originally anticipated, but the overall gaming market remains solid," he added. "The world's gaming market continues to be vibrant, and we have absolutely no doubt that when Ada gets into the marketplace there's going to be lots of excited gamers waiting for it," Huang said.

Image courtesy of "Electronics Weekly"

Nvidia announces new GPUs (Electronics Weekly)

Nvidia's Lovelace architecture 40-Series GPUs have been announced. The RTX 4090 graphics card is on sale from October 12th for $1599, and the RTX 4080 is.

The RTX 4090 ships with 24GB of GDDR6X memory.

Image courtesy of "Genetic Engineering & Biotechnology News"

Genomic Sequencing Analysis Gets Boost through Nvidia, Broad ... (Genetic Engineering & Biotechnology News)

The Broad Institute and Nvidia are partnering to accelerate genome analysis and develop large language models for the development of targeted therapies.

It's easy to understand why the Broad wants to access the power that Nvidia's GPUs offer. As the instruments produce more data, the computing platforms have to rise to the occasion as well, and this requires a new generation of hardware acceleration to process data cheaper, faster, and better. Nvidia has been, according to Kimberly Powell, vice president of healthcare at Nvidia, "working on accelerated computing tools for the last three years." This program, she noted, runs on a multi-cloud platform so that the entire Terra platform can take advantage of it. It's a "point and click" way to analyze genomes, noted Keith Robison, PhD, genomics expert and author of the omicsomics blog, and on top of that, it is easy to use and does not require the same bioinformatics background that GATK does.

Image courtesy of "NME.com"

Nvidia reveals “revolutionary” RTX 4090 graphics card (NME.com)

Nvidia has revealed details of its most advanced graphics card, the RTX 4090 which is the first to use its Ada Lovelace architecture.

Set for release on October 12, 2022, the RTX 4090 is powered by Nvidia's own Ada Lovelace architecture, which has been designed to "provide revolutionary performance for ray tracing and AI-based neural graphics." According to Nvidia, the GeForce RTX 4090 is "the ultimate GeForce GPU. It brings an enormous leap in performance, efficiency, and AI-powered graphics." "Powered by the new fourth-gen tensor cores and optical flow accelerator on GeForce RTX 40 Series GPUs, DLSS 3 uses AI to create additional high-quality frames," outlines Nvidia. "Ada provides a quantum leap for gamers and paves the way for creators of fully simulated worlds."

Image courtesy of "TrustedReviews"

Nvidia RTX 4080 vs Nvidia RTX 3090: Is newer better? (TrustedReviews)

Nvidia recently announced the RTX 4000 Series of graphics cards, calling into question whether its predecessors are still worth buying.

The RTX 4080 will be available to purchase at some point in November this year, with no specific dates singled out yet. It brings in the latest [Lovelace](https://www.trustedreviews.com/explainer/what-is-nvidia-lovelace-4268040) architecture, alongside new features exclusive to the RTX 4000 Series, such as 3rd-generation ray tracing and [DLSS](https://www.trustedreviews.com/explainer/what-is-dlss-4110546) 3. With all these new features and new architecture, is the RTX 3000 Series still worth it? We can use the specs provided by Nvidia to get an idea of which graphics card will come out on top. The latest RTX 4080 comes with improved architecture, boasting a 4nm node instead of Ampere's 8nm node; more transistors generally result in faster performance, as data can be transferred and processed at a faster rate. The RTX 4080 12GB has 7,680 CUDA Cores, while the 16GB model comes with 9,728 CUDA Cores. CUDA Cores are developed by Nvidia and are designed to take on multiple calculations at the same time, allowing for speedy and efficient parallel computing; more cores mean that more calculations can be done at once, which is important for anyone looking to play graphically demanding games. However, the RTX 3090 comes with a staggering 24GB of memory, making it the superior option if memory is a priority for you. Nvidia's own performance graphs show Ada Lovelace offering massive boosts over Ampere, so thanks to the improved architecture and new updated software, it looks like it will be a close contest.

Image courtesy of "AEC Magazine"

Nvidia RTX 6000 'Ada Lovelace' GPU launches (AEC Magazine)

Nvidia RTX 6000 'Ada Lovelace' workstation GPU promises to boost performance by changing the way viewports and scenes are rendered.

The Nvidia RTX 6000 is a dual slot graphics card with 48 GB of GDDR6 memory (with error-correcting code (ECC)), a max power consumption of 300 W and support for PCIe Gen 4, giving it full compatibility with workstations featuring the latest Intel and AMD CPUs. It is not to be confused with 2018's Turing-based [Nvidia Quadro RTX 6000](https://aecmag.com/features/nvidia-takes-giant-leap-with-real-time-ray-tracing/). With Shader Execution Reordering (SER), the Nvidia RTX 6000 dynamically reorganises its workload, so similar shaders are processed together. Nvidia DLSS has been around for several years and, with the new 'Ada Lovelace' Nvidia RTX 6000, is now in its third generation. DLSS 3 processes the new frame and the prior frame to discover how the scene is changing, then generates entirely new frames without having to process the graphics pipeline. Nvidia also dedicated some time to engineering simulation, specifically the use of Ansys software, including Ansys Discovery and Ansys Fluent for Computational Fluid Dynamics (CFD).
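Conceptually, SER amounts to sorting pending shading work by which shader it needs before executing it, so neighbouring threads follow the same code path instead of diverging. The NumPy sketch below illustrates the reordering idea on hypothetical ray-hit data; it is an analogy for the scheduling principle, not how the hardware implements it:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hits, n_shaders = 1_000_000, 8
shader_ids = rng.integers(0, n_shaders, size=n_hits)  # shader each hit needs
hit_indices = np.arange(n_hits)

# SER-style reordering: group hits by shader so each batch is coherent.
order = np.argsort(shader_ids, kind="stable")
sorted_ids = shader_ids[order]
starts = np.searchsorted(sorted_ids, np.arange(n_shaders))
ends = np.append(starts[1:], n_hits)

for sid, (lo, hi) in enumerate(zip(starts, ends)):
    batch = hit_indices[order[lo:hi]]
    # dispatch_shader(sid, batch)  # every hit in this batch runs shader `sid`
```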
