At the Build 2020 conference, Microsoft announced Project Reunion, rolling its Windows desktop API and the Universal Windows Platform (UWP) into a single package.
In its developer blog post, Microsoft defined four focus areas for app development in the coming years:
Unify app development across the billion Windows 10 devices for all current and future apps;
Leaning into the cloud and enabling new scenarios for Windows apps;
Creating new opportunities for developers to build connected apps using Microsoft 365 integration in the Windows experience; and
Making Windows great for developer productivity.
Project Reunion plays into the first point. It combines desktop app libraries and UWP libraries, giving them the ability to communicate with and control elements within each other. This unification enables developers to more easily create apps with better interoperability across device types. In addition, it lets developers update existing applications with new functionality.
Microsoft introduced the Universal Windows Platform in 2016 to attract developers to the then-barren Windows Store. The main goal at the time was to provide a common app platform on every device that runs Windows 10. To achieve this, Microsoft introduced a common UWP core API that is identical across Windows 10 devices such as desktops, Xbox, IoT, and so on. Compatibility with apps built for Android and iOS was to be achieved through API bridges that translate their code to the UWP API.
Win32, on the other hand, is a Windows API that exposes Windows components (the Windows shell, user interface, network services and so forth) to the developer. Nearly all Windows desktop applications use Win32 to some extent.
In recent years, Microsoft has been working to bring UWP to previously incompatible platforms. That effort eventually led to Project Reunion, which finally melds the two together into a decoupled API that can be acquired through platform-agnostic package managers like NuGet.
As 5G deployment plods along in Canada, the next-generation wireless standard has already been adopted by healthcare practitioners in China. At the Huawei Global Analyst Summit 2020, Dr. Lu QingJun, director at China-Japan Friendship Hospital and a full-time remote healthcare practitioner, shared his thoughts on the impact of higher-quality networks on the hospitals of the future.
Lu gave a personal example, describing one of his previous remote cases at a primary care hospital. In that scenario, the patient had to wait 25 hours to receive a consultation, due in large part to the 12GB of data that had to be sent over the network. Lu said that with 5G, that time can be cut to just “dozens of minutes”. The data burden is amplified for patients who need multiple tests, such as CT scans and electrocardiograms.
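To put the 12GB figure in perspective, a quick back-of-envelope calculation shows how raw transfer time shrinks with bandwidth. The throughput numbers below are illustrative assumptions (roughly 50 Mbit/s for a loaded 4G link, 1 Gbit/s for 5G), not measurements from Lu's hospital:

```python
def transfer_minutes(size_gb, throughput_mbps):
    """Raw transfer time in minutes for a dataset of size_gb gigabytes
    over a link sustaining throughput_mbps megabits per second."""
    megabits = size_gb * 8 * 1000  # decimal GB -> megabits
    return megabits / throughput_mbps / 60

t_4g = transfer_minutes(12, 50)    # assumed 4G link -> 32.0 minutes
t_5g = transfer_minutes(12, 1000)  # assumed 5G link -> about 1.6 minutes
```

Actual waiting times include queuing, scheduling and processing on top of transmission, so these figures bound only the network component of the delay.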
When describing telemedicine, Lu predicted that data, technology, and intelligence will become inseparable from healthcare. Although the course has been set, Lu also noted the perpetual battle to improve privacy and secure data transmission, all of which requires new infrastructure for the intelligent hospital.
“We’ve always said that it’s not necessary to replace 4G with 5G in all cases, so we need to identify those cases where only 5G is able to support,” said Lu, noting that the introduction of technology built on 5G should not impede the efficiency of existing workflows.
The conversation then naturally led to whether existing technologies like fibre internet could fill these roles.
“Hospitals already have fibre access, so do we actually need 5G?” Lu asked rhetorically. “You only say that because you don’t understand 5G…we need mobility, but not only that, we need to upgrade our equipment and currently our equipment is wired.”
Network infrastructure will be the backbone that facilitates new communication demands, so its development needs to keep pace with the ICT industry. Because telemedicine is still relatively new, the industry needs to generate new scenarios as testbeds for these newer technologies, Lu explained. These new use cases, whether generated naturally by demand or synthetically, will help push along the development of these technologies.
For example, 5G’s massive bandwidth improvements could remove the bottleneck in real-time communication and medical imaging. Increased bandwidth enables more immediate, higher-quality remote checkups. It could also simplify the diagnostic process by enabling services like real-time remote full-body scanning, a procedure that generates large image files.
Another factor that affects performance is latency. The ITU-R defined Ultra-Reliable Low Latency Communications (URLLC) as one of 5G’s main applications. In a highly technical and mission-critical field like healthcare, low latency is a key concern.
“The 4G technologies are not enough to meet our needs,” Lu pointed out. “In the past, we compressed the data to make it fit into the smaller pipe. And the 4G latency was not acceptable. For 5G, the latency is very low. It’s almost a real-time so the doctors can get real-time data transfer to provide better services to the patients, especially when we talk about the complex and difficult.”
He specified remote monitoring, remote analysis, remote robotics, and remote visits as crucial areas of focus. He said that while doctors understand the benefits of remote practices, vendors are not yet prepared to manufacture the equipment due to inadequate certification and qualification processes.
There are more than 13,000 secondary (specialist) hospitals in China, and adding telemedicine capabilities to them all would incur significant cost. That said, developing remote healthcare also creates new business opportunities for carriers.
Moreover, Lu said that the entire network stack (network slices, the transport network, and edge computing) could benefit from 5G technologies. The benefit isn’t limited to telemedicine but extends to the communication industry as a whole.
In addition, 5G could help streamline a hospital’s logistical operations, such as payment. China’s mobile payment system is the most established in the world by far: in 2019, over 81 per cent of the country’s smartphone owners frequently paid through proximity mobile systems such as QR codes. But while China’s digital commerce is developing at an explosive pace, hospitals of the future will demand more robust transaction support.
“We need to have innovation in the healthcare service provision,” said Lu. “And we also need to have some payment assurance like basic medical insurance, commercial insurance, and also some banking services support. And that has high requirements on computing, on storage and on data processing. These requirements will only be satisfied by adding new ICT technologies.”
After receiving waves of backlash from its users, AMD announced that its upcoming processors based on the Zen 3 microarchitecture will be supported on X470 and B450 series motherboards, retracting an earlier decision to exclude those platforms.
In a Reddit thread, AMD said that it’s working with motherboard partners to develop basic input/output system (BIOS) versions that would enable support for Zen 3 processors on X470 and B450 motherboards.
Once flashed onto the motherboard, the new BIOS would disable support for older generation Ryzen processors to free up space for the new code. The upgrade is one-way: users cannot revert to an older BIOS version once the upgrade is complete. To avoid a “no-boot” situation, users will need to provide proof that they’ve purchased a Zen 3 desktop processor and a 400 series motherboard before they can download the BIOS.
Earlier this month, AMD published a blog post announcing that the fourth generation Ryzen processors would not be compatible with 400 series motherboards despite using the same AM4 socket. The company had previously promised to support the AM4 socket “until 2020”, but never specified an exact date for its retirement.
In the initial blog post, AMD cited BIOS size constraints as the limiting factor. The post explained that, at a maximum of 16MB, the read-only memory (ROM) used to store the BIOS is too small to hold the code necessary to support the new processors.
The hardware community immediately criticized the move, and users who had hoped to upgrade in the future were especially vocal. Because AMD delayed its affordable mainstream B550 motherboard chipset, many newcomers to AMD had to purchase 400 series motherboards as the most affordable entry point to the platform. In addition, many blamed AMD for failing to communicate that new processor support would be exclusive to 500 series motherboards, information that would have affected their purchasing decisions.
Furthermore, many dismissed AMD’s reasoning and argued that motherboard manufacturers could simply add more ROM. Others called for the company to trim support for older processors to make room for the new code.
AMD noted that the availability of the new BIOS will vary and may not coincide with the Zen 3 processor launch.
Intel recently released its 10th gen vPro desktop and mobile processors for businesses, bringing a bevy of management and security features along with improved performance.
In total, Intel launched 27 SKUs across its mobile and desktop Core i5, Core i7, and Xeon ranges. All of the announced processors are based on the Comet Lake architecture rather than Ice Lake. Interestingly, several vPro processors have unlocked multipliers for overclocking, as denoted by their “K” suffix. While overclocking capabilities are interesting for enthusiasts, business owners care little for them; they favour a product’s consistency and reliability over tunable performance.
Intel’s vPro platform is a portfolio of both quality assurance and hardware features. vPro-certified processors are held to higher quality standards, carry hardware security features for low-level protection, and offer more robust remote management. They also undergo a rigorous validation process to ensure compatibility with new technologies. The vPro platform also sets criteria outside the processor by requiring specific chipsets and high-end memory and I/O components like Optane memory.
Intel’s 10th gen vPro processors also carry implications for Project Athena, Intel’s new standard for laptops. Previously, Athena-certified business laptops like the HP Elite Dragonfly had to rely on Intel’s 8th gen vPro processors; the 10th gen vPro processors will replace them in future Athena business laptop designs.
Intel 10th gen vPro processors will be coming to products from HP, Dell and Lenovo, among others.
During the Nvidia GPU Technology Conference today, Nvidia CEO Jensen Huang revealed the Nvidia EGX A100 converged accelerator powered by the company’s next-generation Ampere graphics processing unit (GPU) architecture.
Though the Ampere GPU architecture is still shrouded in mystery, Nvidia has confirmed that it will be built on TSMC’s 7nm process. Ampere is considered a major architectural redesign from the current Volta architecture.
Ampere’s first product, the A100, will strictly target heavy compute workloads such as simulation, rendering, machine learning, and cloud virtualization. The GPU on the A100 consists of 54 billion transistors and introduces new features such as a security engine and third-generation Tensor cores with the new TensorFloat-32 (TF32) precision. The A100 also integrates an Nvidia Mellanox ConnectX-6 Dx network adapter onboard.
“By installing the EGX into a standard x86 server, you turn it into a hyper-converged, secure, cloud-native, AI powerhouse, it’s basically an entire cloud data centre in one box,” said Huang.
Complementing the EGX A100 is Nvidia’s EGX cloud-native AI platform with a focus on remote management and secure data processing.
The A100 is also designed with scalability in mind. With the multi-instance GPU (MIG) feature, a single A100 can be partitioned into up to seven independent GPU instances, each with its own dedicated resources. Alternatively, multiple A100 GPUs can act as a single GPU by connecting through Nvidia’s NVLink.
On its product page, Nvidia claims that the A100 can deliver up to six times higher performance for training and seven times higher performance for inference compared to Volta, Nvidia’s previous architecture.
The Nvidia EGX A100 is in full production and shipping to customers worldwide. Expected system integrators include Amazon Web Services (AWS), Cisco, Dell Technologies, Google Cloud and Microsoft Azure, among others. More details on the Ampere architecture will be revealed on Tuesday, May 19, at Nvidia’s GTC virtual event.
AMD is targeting the pros with the announcement of its Radeon Pro VII workstation graphics card on May 13.
Based on the Vega 20 GPU, the Radeon Pro VII graphics card features 60 compute units (CUs), four fewer than a full Vega 20 GPU and the same count as the consumer Radeon VII graphics card. It comes with 16GB of ECC high-bandwidth memory (HBM) capable of reaching 1TB/s bandwidth. The card also communicates over the PCIe 4.0 bus, which has double the throughput of PCIe 3.0.
The AMD Radeon Pro VII excels at double-precision floating-point number crunching, offering 6.5 tera floating-point operations per second (TFLOPS) in FP64. With the Radeon Pro VII, AMD aims to offer an affordable option for design and simulation professionals working with high-precision workloads. Simultaneously, it hopes to capture the attention of VFX and media production teams with its 16GB memory buffer useful for holding high-res media assets.
One neat AMD-exclusive feature is ProRender 2.0. Typically, rendering is done either on the GPU or the CPU; ProRender 2.0 uses both simultaneously to cut down render times. It’s compatible with AMD Threadripper processors, as well as the consumer-oriented Ryzen 9 and Ryzen 7 platforms. Applications with ProRender plug-in support include Unreal Engine, Autodesk Maya, SideFX Houdini and Blender, among others. AMD made the ProRender SDKs available under the Apache License 2.0, shaving off some back-and-forth legal headaches for developers looking to implement them in their software.
The Radeon Pro VII comes with six DisplayPort 1.4 ports for multi-panel, synchronized, high-resolution output. A typical use case would be large-scale multi-panel digital signage, or filming with synchronized LED backdrops. By attaching up to four Radeon Pro VII cards to an AMD FirePro S400 sync module, up to 24 displays can work in sync as a single output.
The AMD Radeon Pro VII will be available in June for US$1,900 (around CA$2,660) through Memory Express and Newegg Canada.
The ubiquity of 5G will cover everything from IoT sensors, to smart devices, to cloud communication. But the technologies spawned by 5G development can extend well beyond terrestrial networks. At IBM Think 2020, MIT professor Muriel Médard spoke about how satellites can also benefit from the development of 5G.
One of 5G’s plethora of features is a coding technique called Random Linear Network Coding (RLNC). Médard defined network coding as a “mathematical manipulation of data that [allows it] to be reliably retrieved, reliably represented and transported in a network”.
In essence, through encoding and decoding techniques, RLNC allows the receiver to reconstruct lost packets in a data stream, reducing the need to resend data when packets are lost. It can increase reliability when sending sensitive information like financial data, and can also be applied to monitoring sensors and vehicles in remote areas.
As a backgrounder: to transmit large quantities of data between two devices, the information must first be cut up and encapsulated into packets. Sending data in small packets provides many benefits, including higher efficiency and increased reliability; if data becomes corrupted or lost during transmission, only the affected packets need to be resent rather than the entire dataset.
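To make the idea concrete, here is a minimal sketch of network coding over GF(2), the simplest finite field (production RLNC typically works over larger fields such as GF(2^8); the function names and integer "packets" here are illustrative, not from Médard's talk). Each coded packet is the XOR of a random subset of the source packets, and the receiver recovers the originals by Gaussian elimination once it has collected enough linearly independent combinations:

```python
import random

def encode(packets, num_coded, rng):
    """Generate coded packets over GF(2). Each coded packet is a pair
    (coefficient_mask, payload): the XOR of a random nonzero subset of
    the k source packets, which are modeled as integers."""
    k = len(packets)
    coded = []
    for _ in range(num_coded):
        mask = rng.randrange(1, 1 << k)  # random nonzero coefficient vector
        payload = 0
        for i in range(k):
            if mask >> i & 1:
                payload ^= packets[i]
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2). Returns the k source packets,
    or None if the coded packets do not yet have full rank."""
    pivots = {}  # pivot bit position -> (reduced coefficient mask, payload)
    for mask, payload in coded:
        # Reduce the incoming row by existing pivot rows, in ascending order.
        for bit in sorted(pivots):
            if mask >> bit & 1:
                pmask, ppay = pivots[bit]
                mask ^= pmask
                payload ^= ppay
        if mask:  # linearly independent; its lowest set bit becomes the pivot
            pivots[(mask & -mask).bit_length() - 1] = (mask, payload)
    if len(pivots) < k:
        return None  # not enough independent combinations received yet
    # Back-substitute from the highest pivot down so each row isolates one packet.
    out = [0] * k
    for b in sorted(pivots, reverse=True):
        mask, payload = pivots[b]
        for j in range(b + 1, k):
            if mask >> j & 1:
                jmask, jpay = pivots[j]
                mask ^= jmask
                payload ^= jpay
        pivots[b] = (mask, payload)
        out[b] = payload
    return out
```

The property the article describes falls out of this structure: the receiver can accumulate coded packets from any mix of transmissions and decode as soon as it holds k independent ones, regardless of which specific packets were lost in transit, so the sender never has to retransmit a particular missing packet.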
In urban centres, radio towers are relatively near the user, creating stronger signals that are more resistant to environmental factors. In satellite networks, however, the long distance between the sender and the receiver leaves the signal vulnerable to disruptions from inclement weather. In addition, high latency compounds the finicky signal: if data is lost in transit, it takes longer to resend.
Despite its shortcomings, underserved communities in Canada and around the world rely on satellite to stay connected. Due to geographical and business limitations, it’s not always feasible to pull landlines or install towers in these locations, so it’s critical for satellite network technologies to advance in parallel with the networks back on the ground.
Robert Hallock, AMD technical marketing lead, confirmed that AMD’s upcoming Zen 3 processors will not work with motherboards carrying 400 series and older chipsets.
In a blog post, Hallock confirmed that Zen 3 processors will continue to use the AM4 socket, but will only be backwards compatible with AMD’s X570 and B550 motherboards. While 500 series motherboards would only require a BIOS update to enable compatibility, users on older platforms would need to purchase a new motherboard.
“AMD has no plans to introduce ‘Zen 3’ architecture support for older chipsets,” Hallock wrote. “While we wish [we] could enable full support for every processor on every chipset, the flash memory chips that store BIOS settings and support have capacity limitations. Given these limitations, and the unprecedented longevity of the AM4 socket, there will inevitably be a time and place where a transition to free up space is necessary—the AMD 500 Series chipsets are that time.”
The AM4 socket was announced alongside the first generation Ryzen processors in 2016. At its release, AMD promised to support the AM4 socket until 2020. Because the socket has yet to reach end-of-life, users of AMD’s older platforms had hoped to upgrade to the 4th generation Ryzen processors once they arrive. Zen 3 will mark the first time a Ryzen processor isn’t backwards compatible with all of AMD’s AM4 platforms; until now, every AM4 motherboard has been compatible with most processors from all three generations of Ryzen, assuming the motherboard vendor provides a BIOS that supports them.
Although AM4 is nearing retirement, AMD has yet to announce an end-of-life date or a successor. The company is looking to cement its future processor roadmap before making an announcement.