The Office of the Privacy Commissioner of Canada discovered that Cadillac Fairview, a commercial real estate company, collected and analyzed the images of five million shoppers without their knowledge or consent using an Anonymous Video Analytics (AVA) tool.
AMD has launched its RX 6000 series of graphics cards based on its new RDNA 2 architecture.
Three new cards were launched at its virtual launch presentation on Oct. 28: the Radeon RX 6800, Radeon RX 6800 XT, and the Radeon RX 6900 XT.
Card              Boost clock / Game clock    Power connectors
AMD RX 6900 XT    2250MHz / 2015MHz           2 x 8-pin
AMD RX 6800 XT    2250MHz / 2015MHz           2 x 8-pin
AMD RX 6800       2105MHz / 1815MHz           2 x 8-pin
All three cards are based on the RDNA 2 architecture, an improved version of AMD’s RDNA architecture used by the Radeon RX 5000 series graphics cards. Compared to the first iteration, RDNA 2 offers 54 per cent higher performance per watt despite being built on the same 7nm node by TSMC. At 4K resolution, AMD claims that RDNA 2 offers roughly twice the performance of RDNA.
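As a sanity check on how those two claims relate: performance equals performance per watt times power, so doubling performance at 54 per cent better efficiency implies roughly 1.3 times the power draw. The sketch below is rough, illustrative arithmetic only; AMD's two figures come from different benchmark scenarios, so the ratio is not an official spec.

```python
# Performance = (performance per watt) x (power), so the implied power ratio
# between two designs is the performance ratio divided by the efficiency ratio.
perf_ratio = 2.0            # AMD's claim: roughly 2x RDNA performance at 4K
perf_per_watt_ratio = 1.54  # AMD's claim: 54 per cent better performance per watt
power_ratio = perf_ratio / perf_per_watt_ratio
print(round(power_ratio, 2))  # -> 1.3
```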
Debuting with RDNA 2 is AMD’s new Infinity Cache, a 128MB slice of cache memory on the GPU that’s accessible by all cores. AMD claims that Infinity Cache acts as a “massive bandwidth amplifier” and raises the effective memory bandwidth to 1664GB/s, 2.4 times higher than the standard GDDR6 memory onboard. For reference, the Radeon VII, which used stacks of high-bandwidth memory (HBM2) and a 4096-bit memory bus, achieved a memory bandwidth of 1024GB/s.
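Raw memory bandwidth follows directly from bus width and per-pin data rate, which is how figures like the 1024GB/s above are derived. A minimal sketch; the per-pin data rates below are typical values for these memory types, not figures from AMD's presentation:

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Raw bandwidth in GB/s: bus width (bits) x per-pin rate (Gbit/s), divided by 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

# Radeon VII: 4096-bit bus, HBM2 at 2.0 Gbit/s per pin
print(memory_bandwidth_gbs(4096, 2.0))  # -> 1024.0 GB/s
# A typical 256-bit GDDR6 configuration at 16 Gbit/s per pin
print(memory_bandwidth_gbs(256, 16.0))  # -> 512.0 GB/s
```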
AMD’s doubling down on memory bandwidth can be seen as a way to keep the GPU running at its peak. As AnandTech explained in its Nvidia Turing architecture deep dive, memory bandwidth does not improve at the same pace as semiconductor density. This was one of the reasons AMD opted for the expensive HBM2 memory on its Radeon VII graphics card.
Now back to the design at hand. With the first RDNA cards, AMD nearly doubled graphics memory bandwidth compared to its older Graphics Core Next cards by moving to faster GDDR6 memory on a 256-bit bus. With RDNA 2, AMD retains that 256-bit GDDR6 configuration rather than widening the bus or adopting faster memory. AMD is therefore likely banking on Infinity Cache to alleviate any bottleneck that could hamper the performance of its new Ray Accelerators, dedicated ray tracing units added to each RDNA 2 compute unit.
Smart Access Memory is another memory enhancement that can increase performance. The caveat is that it can only be enabled by pairing an RX 6000 series graphics card with an AMD Ryzen 5000 series processor. Once enabled, the processor gains “full access” to the GPU memory. AMD said that when combined with the new Rage Mode one-click overclocking feature, gamers could squeeze out an extra two to 13 per cent of performance for free. During the presentation, AMD only mentioned Smart Access Memory in conjunction with 500 series chipset motherboards; it’s not clear whether 400 series chipset motherboards will receive it as well.
During the presentation, AMD CEO Lisa Su said that a super-resolution feature is in the works to “give gamers an option for more performance when using ray tracing.” Su did not announce when it will be available.
The RX 6000 series graphics cards bring support for Microsoft’s DirectX 12 Ultimate graphics API. It aims to increase visual fidelity and processing efficiency through four key technologies:
DirectX Raytracing: an advanced ray-tracing technique that produces more realistic lighting and shadows.
Variable rate shading: renders different parts of an image at different detail levels.
Mesh shaders: a new shader pipeline design that makes shaders more flexible and efficient.
Sampler feedback: a feature that better predicts which graphical resources should be loaded next.
By improving power efficiency through holistic design optimizations, AMD was able to limit the total board power of its RX 6000 series graphics cards to 300W. According to the presentation slides, AMD lists a 650W power supply as the baseline requirement. That may be enough to run the RX 6800, but AMD recommends a 750W unit for the RX 6800 XT and an 850W unit for the RX 6900 XT.
For the most part, AMD’s carefully crafted presentation strongly suggests that the Radeon RX 6000 series will deliver better value than its green competitor’s offerings. For graphics cards, value isn’t a simple conversation about pricing; performance at the targeted resolution also plays into the equation.
First up is the Radeon RX 6800, the most affordable option at US$579. AMD compared the Radeon RX 6800 to the Nvidia RTX 2080 Ti, Nvidia’s former flagship card built on the Turing architecture. AMD’s presentation slides showed a convincing victory for the RX 6800 at 1440p and a solid lead at 4K. Based on preliminary reviews by various outlets, the RTX 2080 Ti roughly equates to the performance of the Nvidia GeForce RTX 3070 launching on Oct. 29, so AMD’s victory here could very well translate to a win against Nvidia’s midrange contender. Do note, however, that the RTX 3070 is US$80 cheaper and comes with only 8GB of memory.
Next up is the Radeon RX 6800 XT at US$649, which squared off against Nvidia’s current-gen GeForce RTX 3080 graphics card. Not only did it show seriously competitive numbers against Nvidia’s product, but it did so at a lower power profile (which AMD pointedly noted by enclosing it in bold brackets). It also undercuts Nvidia’s card by US$50, sweetening the value proposition ever so slightly.
Saving the best for last, AMD pitted its top-shelf Radeon RX 6900 XT against Nvidia’s flagship GeForce RTX 3090 graphics card. With both Rage Mode overclocking and Smart Access Memory enabled, the RX 6900 XT traded blows with Nvidia’s best at 4K resolution. And at US$999, the RX 6900 XT undercuts the RTX 3090 Founders Edition by a significant US$500.
The Radeon RX 6800 and Radeon RX 6800 XT will be available globally on Nov. 18. The Radeon RX 6900 XT will be available on Dec. 8. AMD has promised to take every measure it can to prevent scalper bots from instantly draining inventory at launch, a nightmare scenario that struck Nvidia when it launched its GeForce RTX 3000 series graphics cards.
Wireless networks are typically associated with internet access in corporate networks or entertainment services like Netflix. Yet WiFi’s applications extend far beyond streaming data to electronics. Now that the average household owns about 10 smart devices, the stage is set for WiFi sense.
WiFi sense is a type of short-range passive radar technology, and it’s surprisingly accurate. It can easily pick up an object’s movement from room to room and zero in on gestures for activity classification. For large events, a sensor could be placed at the entrance to count visitors. Hospitals and elderly care facilities can use WiFi sensors to monitor patient movement and biometric data like heartbeats, breathing, and limb movements.
Simply put, WiFi sense measures how WiFi signals interact with movement. By pinging the environment, WiFi sense systems can easily track locations and movement based on how the signals are reflected and deflected.
WiFi sense systems communicate in either infrastructure mode or ad-hoc mode. In infrastructure mode, each node in the sensing system communicates with a central access point (AP). In ad-hoc mode, the nodes communicate with one another directly.
Topologies aside, WiFi sense can be active or passive. An active system sends a WiFi packet dedicated to sensing purposes. Conversely, a passive WiFi sense system appends WiFi sense data to existing WiFi traffic.
Since a passive system doesn’t send extra packets, it requires minimal processing overhead. Active systems need more computational power, but they also have greater control over the transmission rate, bandwidth, beamforming, and other measurement parameters.
Preliminary testing shows that WiFi sense performance correlates with channel bandwidth: the larger the bandwidth, the higher the resolution. The maximum channel bandwidth is 20MHz in the 2.4GHz band, 160MHz in the 5GHz band, and 2GHz in the 60GHz band.
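One way to see why bandwidth drives resolution is the classic radar range-resolution formula, Δr = c / 2B: a wider bandwidth B lets the system distinguish reflections that arrive closer together in time. The article doesn’t cite this formula, so treat the sketch below as illustrative rather than a description of any particular WiFi sense product:

```python
C = 299_792_458  # speed of light in m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """Classic radar range resolution: delta_r = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# Wider WiFi channels resolve finer spatial detail:
for label, bw_hz in [("2.4GHz band, 20MHz", 20e6),
                     ("5GHz band, 160MHz", 160e6),
                     ("60GHz band, 2GHz", 2e9)]:
    print(f"{label}: {range_resolution_m(bw_hz):.3f} m")
```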
Motion sensing can be achieved using infrared and radar sensors, patient monitoring can be driven by cameras plus AI, and smartphones can already detect gestures by combining a time-of-flight (ToF) sensor with a standard camera. These existing solutions raise an obvious question: where does WiFi sense fit in all this?
The answer is clear-cut: for most applications, WiFi sense needs no extra hardware, an advantage over existing solutions. Active radar systems require dedicated antennas and transceivers that are complex and costly. WiFi sense, on the other hand, uses existing devices like cell phones, PCs, and mesh WiFi systems. The user would only need to install the required software to transform their setup into a WiFi sense system.
“There are about 15 billion WiFi client devices out there,” said Taj Manku, CEO of Cognitive Systems. “With this [WiFi sense], you can now enable all these devices, which were never meant to be motion sensors, to now be motion sensors. And that can then provide the user with other capabilities going forward, whether that’s home monitoring…or IoT integration for smart homes. You are doing this simply by software.”
WiFi also penetrates through walls, enabling out of line-of-sight (LOS) operations, an important consideration for security monitoring applications. And because it doesn’t rely on image data, it retains a degree of privacy.
For home applications, WiFi sense can be installed on virtually any WiFi device. Manku noted that in terms of motion sensing, service quality doesn’t degrade much with the quality of the device.
“A lot of the very cheap devices, like the smart plugs, for example, they are just as good as a complicated device like an Alexa or Google Home,” said Manku.
Still, Manku noted that the software solution would evaluate every device to see if it has the necessary performance, but the baseline requirement is very low.
While the idea is promising, WiFi sense isn’t without challenges. WiFi signals, like any wireless transmission, are vulnerable to interference that decreases sensing accuracy. And if the WiFi equipment acting as sensors comes under heavy traffic, the depleted resources could reduce service quality.
Coverage and signal strength are other considerations for out of LOS applications. As previously mentioned, WiFi sense works best with high-frequency, high-bandwidth transmissions. But high frequencies have trouble penetrating walls. Thus, solution designers need to balance bandwidth against accuracy, deploy more sensing nodes, or consider the sensor’s proximity to the target.
In enterprise scenarios like healthcare, the demand for high resolution sets more stringent hardware requirements. In addition to frequency and bandwidth criteria, devices running active systems, with their larger processing overheads, need higher computational power.
Because the WiFi standard was developed with interoperability and backwards compatibility in mind, layering extra functionality on top is relatively easy. With that said, WiFi equipment manufacturers need to expose lower-level chipset and firmware access to control data flow. Similarly, the operating system may also need lower-level access to network gear to allow standardized application interaction.
“The first hurdle is that you have to be able to work with the WiFi chipset vendors,” Manku weighed in. “And there are many different chipset vendors: there’s Qualcomm, there’s Broadcom, there’s a bunch of them. When you start, you may start working with one, but then eventually, you have to start working with all of them.”
Manku said that Cognitive Systems is working with 17 chipset vendors today.
Equally important is how these solutions are tested and verified. Manku commented that while Cognitive Systems has its own testing facilities, other manufacturers may not have the same luxury. Thus, independent third-parties need to have a standardized testing method, and the industry needs a strong push to establish them.
When will it arrive?
Although WiFi sense is just beginning to gain traction, solutions built around it are already here. Cognitive Systems already has both software and hardware products for motion sensing. It hopes to work with major internet service providers in Canada to help differentiate their service packages.
Another example comes from the School of Electrical Engineering & Computer Science (SEECS) at the National University of Sciences & Technology in Islamabad, Pakistan. The study, titled Wireless Health Monitoring using Passive WiFi Sensing and published in 2017, explored the potential of using WiFi sense to track tremors, falls, and breathing rates of the elderly. The study concluded that the university’s system had an 87 per cent accuracy in measuring breathing rate, 98 per cent accuracy in detecting falls, and 93 per cent accuracy in classifying tremors. Moreover, the study argued that the WiFi sense solution is low cost and far less “cumbersome or even demeaning” than wearing monitoring bracelets, which is even more challenging for dementia patients.
South Korean semiconductor company SK Hynix is acquiring Intel’s NAND flash memory division for US$9 billion.
The acquisition, announced on Oct. 20, will see SK Hynix absorb Intel’s NAND SSD-associated IP and employees, as well as Intel’s NAND fab in Dalian, China. Beyond expanding SK Hynix’s NAND storage portfolio, the purchase will also bring over Intel’s current customer base.
“I am proud of the NAND memory business we have built and believe this combination with SK Hynix will grow the memory ecosystem for the benefit of customers, partners and employees,” said Bob Swan, CEO of Intel, in a press release. “For Intel, this transaction will allow us to further prioritize our investments in differentiated technology where we can play a bigger role in the success of our customers and deliver attractive returns to our stockholders.”
The SK Hynix press release explained that Intel intends to focus on AI and 5G; the sale of its NAND business can be seen as a move to concentrate on core products like processors. With that said, Intel will retain its Optane 3D XPoint storage-class memory technology and remain in the storage business.
Intel’s Non-Volatile Memory Solutions Group (NSG) has fallen on hard times. In Intel’s Q1 2019 earnings call, Swan noted that its memory business fell 12 per cent due to NAND’s pricing pressures, low demand, and deteriorating average sale price.
“We got to generate more attractive returns on the NAND side of the business,” Swan said in the call. “And the team is very focused on making that a reality. And to the extent there is a partnership out there that’s going to increase the likelihood and/or accelerate the pace, we’re going to evaluate those partnerships along the way so it can be enhancing to the returns of what we do in the memory space.”
SK Hynix will receive an initial US$7 billion payment. The remaining US$2 billion will be paid upon the final closing in March 2025. Intel will retain all IPs related to the manufacturing and design of NAND flash wafers until the final closing.
A trip down memory lane
Intel first partnered with Micron Technology in 2006 to produce solid-state drives under the IM Flash Technologies (IMFT) banner. As part of the partnership, Intel purchased Micron’s NAND at cost. Products from the partnership included SSDs for both enterprises and consumers.
In 2015, Intel and Micron created 3D XPoint, a storage-class, non-volatile memory that was much faster and more durable than traditional NAND flash storage. The technology was sold under the Optane and QuantX SSD brands. In the same year, Intel announced that it would build its own NAND fabrication plant in Dalian, diverging from Micron’s NAND operations. The pair continued to collaborate on 3D XPoint.
Intel and Micron eventually ended their 3D XPoint partnership in July 2018. Soon after, in October 2018, Micron expressed interest in purchasing Intel’s stake in the final IM Flash fab in Utah to produce 3D XPoint chips. The deal closed in late 2019. As part of the deal, Micron promised to sell 3D XPoint chips to Intel while Intel worked out a transition plan.
There could be a good reason why Intel didn’t turn its Dalian 3D NAND fab into a 3D XPoint fab. In a comment to Blocks & Files, analyst Jim Handy said Intel’s 3D XPoint is very unprofitable to produce. He estimated that Intel lost $2 billion on 3D XPoint across 2017 and 2018, and $1.5 billion in 2019. That’s unsurprising, however, as 3D XPoint production is nowhere near as mature as 3D NAND’s.
Given Micron and Intel’s extensive history, Micron’s seeming disinterest in Intel’s NAND business came as a surprise.
Intel and Micron were not immediately available for comment.
The Canadian Radio-television and Telecommunications Commission (CRTC) is granting Space Exploration Technologies Corp., also known as SpaceX, a Basic International Telecommunications Services (BITS) license.
In an Oct. 15 letter to Bret Johnsen, chief financial officer of Space Exploration Technologies, the CRTC said it granted the license “after consideration of the comments received.” A BITS license allows a company to provide international telecommunication services but does not allow it to operate as an internet service provider within the issuing nation.
SpaceX had filed a request for a BITS license in June 2020 with the aim of eventually providing internet service to remote areas in Canada.
The CRTC noted that it has received 2,585 interventions regarding the company’s BITS application.
The CRTC’s letter clarified that “all entities who provide services as a facilities-based carrier must at all times comply with the appropriate regulatory framework, including the ownership and control requirements of section 16 of the Act and the Canadian Telecommunications Common Carrier Ownership and Control Regulations.”
SpaceX’s Starlink program operates a network of low Earth orbit satellites that can beam internet to almost anywhere on Earth. On Sunday, the company launched 60 more satellites, expanding its fleet to 835. There’s still a ways to go for ubiquitous global coverage, however, as SpaceX wants to launch at least 120 new satellites every month and eventually deploy 12,000 satellites into low Earth orbit.
SpaceX claims that its Starlink program can provide a 100Mbps downlink with 30ms latency. At that speed, it would take under a second to download a 10MB file. Although satellites can deliver internet to areas that can’t host radio towers, they’re also more susceptible to interference from inclement weather.
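The back-of-envelope download figure comes from converting megabytes to megabits: a 10MB file is 80 megabits, which takes 0.8 seconds at 100Mbps. A minimal sketch of the arithmetic:

```python
def download_time_s(file_size_mb: float, link_speed_mbps: float) -> float:
    """Seconds to transfer a file: megabytes x 8 bits/byte, divided by link speed in Mbit/s."""
    return file_size_mb * 8 / link_speed_mbps

print(download_time_s(10, 100))  # -> 0.8 seconds for a 10MB file at 100Mbps
```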
Above, a row of Starlink satellites marches through the night sky over Leiden, Netherlands in 2019. Video credit: Marco Langbroek, Vimeo.
The wait is no more. AMD lifted the curtain on its Ryzen 5000 series processors and its Zen 3 core architecture at its “Where Gaming Begins” conference yesterday.
Zen 3, like Zen 2, uses a separate CPU die and I/O die on the same package. The CPU die is manufactured on TSMC’s 7nm node, while the I/O die is made on the 12nm node. Separating the CPU and I/O dies helps with yields, as smaller chips are easier to manufacture.
AMD’s Zen 3 architecture continues to improve on Zen 2’s modular design. In the previous generation, AMD’s Ryzen 3000 series desktop processors used multiple core chiplet dies (CCD) consisting of two 4-core compute core complexes (CCX) connected over the Infinity Fabric Interconnect. Each CCX had its own, separate 16MB cache. With Zen 3, AMD has unified the pools of cache into a single 32MB pool, thus decreasing latency and increasing resource sharing across the cores.
AMD has also reworked the pipeline from front to back, including widening the integer and floating-point execution units (EUs), beefing up load/store operations, and improving the branch predictor. AMD also announced a “zero bubble” branch prediction feature that hides latency, a major roadblock in previous generations of the Zen architecture.
All in all, AMD claims that Zen 3 delivers 19 per cent higher instructions per clock while being 24 per cent more power-efficient compared to Zen 2. Further, it underscored Zen 3’s single-threaded performance by demonstrating the Ryzen 9 5900X achieving 631 points in Cinebench R20, a rendering benchmark. The company claimed that this is the first processor to break the 600-point barrier in the benchmark’s single-threaded test.
The initial launch lineup on Nov. 5 will have four SKUs: the Ryzen 9 5950X, Ryzen 9 5900X, Ryzen 7 5800X, and the Ryzen 5 5600X.
Processor            Boost clock / Base clock
AMD Ryzen 9 5950X    4.9GHz / 3.4GHz
AMD Ryzen 9 5900X    4.8GHz / 3.7GHz
AMD Ryzen 7 5800X    4.7GHz / 3.8GHz
AMD Ryzen 5 5600X    4.6GHz / 3.7GHz
There is a price hike this time around. All Ryzen 5000 series processors cost $50 more than the products they’re designed to replace. Moreover, AMD has pulled the stock coolers from the Ryzen 9 5900X and the Ryzen 7 5800X; in previous generations, the bundled coolers granted AMD a big value lead over Intel processors.
For the first time in a long time, AMD processor prices have outpaced Intel’s. At MSRP, the Ryzen 7 5800X costs US$125 more than Intel’s Core i7-10700K ($499 vs. $374), which brings it into price parity with Intel’s consumer flagship CPU, the 10-core Core i9-10900K. The price difference narrows in the mid-range, though, with the Ryzen 5 5600X being just $37 more than the Core i5-10600K (US$299 vs. US$262). The increased prices hint at AMD being confident that its products will win against Intel’s Comet Lake-S. With that said, Intel’s Rocket Lake processors, built on Intel’s 14nm process, are expected to land in March 2021 to challenge the market.
Finally, the Ryzen 5000 series will be the last generation of processors to use the AM4 socket. The socket is now 4 years old and has lasted through three generations of Ryzen processors.
Motherboards with AMD’s 500 series chipsets will natively support the new Ryzens. Motherboards with the 400 series chipsets, however, will have to wait until January for a new beta BIOS. The upgrade is forward-only for X470 and B450 motherboards, meaning that once the BIOS is flashed, they will no longer support older generations of Ryzen processors. To avoid a “no-boot” situation, users will need to provide proof that they’ve purchased a Zen 3 desktop processor and a 400 series motherboard before they can download the BIOS.
AMD has also confirmed that its 5nm products are on track and that its graphics solutions built on “Big Navi” will be announced on Oct. 28.
Read our previous coverage for backgrounders on the topics covered here.