Category: PC & Components

Get the latest headlines on computer hardware news and reviews, including analysis, features, and opinions on personal computers, desktops, and PC components.

  • Which Port should we use on a gaming PC: DisplayPort or HDMI?

    Which Port should we use on a gaming PC: DisplayPort or HDMI?

    Modern personal computers generate a lot of content. To display it on computer monitors or TVs, we need display interfaces: standardized connections that carry video and audio.

    The most commonly used ports for connecting personal computers to displays are DisplayPort and HDMI. Recently, the versatile USB Type-C has also become popular, thanks to alternate modes that can carry these display standards. All of these ports have multiple uses, but which one is best for a gaming PC? We’ll answer this question by analyzing the features and advantages of each interface.

    DisplayPort

    DisplayPort is the most recent of the major display interface standards, introduced by the VESA organization in 2006. Its main purpose is to transmit content, including video, audio, and data, between a PC and one or multiple monitors. The DisplayPort connector has 20 pins (32 in the internal connectors used in laptops) and features a small latch to ensure a secure fit. You may come across full-size or reduced-size connectors, also known as Mini DisplayPort (MiniDP or mDP), which offer the same functionality.

    Currently, the most commonly used versions of DisplayPort are 1.3 and 1.4, providing a bandwidth of up to 32.4 Gbps. This high bandwidth allows for native screen resolutions up to 8K (7,680 x 4,320 pixels). The audio signal supported by DisplayPort can handle a maximum of 8 uncompressed channels, with a sampling rate of 192 kHz and a bit depth of 24 bits. Additionally, DisplayPort supports optional DisplayPort Content Protection (DPCP) with 128-bit AES encryption and has been compatible with the widely used digital content protection standard, HDCP, since revision 1.1.

    DisplayPort has different maximum supported video frequencies depending on the version. All versions support 144 Hz at 1080p resolution. Version 1.2 supports 144 Hz at 2K resolutions. Version 1.3 can handle up to 120 Hz at 4K or 8K with a refresh rate of 30 Hz. Version 1.4 enables a refresh rate of 144 Hz in 4K through Display Stream Compression (DSC), and it supports up to 60 Hz at 8K resolution with HDR.
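    To see why DSC becomes necessary at the higher refresh rates, a rough back-of-the-envelope calculation helps. The short Python sketch below compares the uncompressed pixel data rate of a few modes against approximate effective link rates (the effective values are assumptions derived from the raw figures minus 8b/10b coding overhead; blanking intervals and audio are ignored, so real requirements are somewhat higher).

    ```python
    # Rough sketch: uncompressed video data rate vs. approximate link capacity.
    # Ignores blanking intervals and protocol overhead beyond line coding.

    def video_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel=24):
        """Raw pixel data rate in Gbit/s (no blanking, no audio)."""
        return width * height * refresh_hz * bits_per_pixel / 1e9

    # Assumed effective payload rates after 8b/10b line coding:
    # DisplayPort 1.3/1.4: 32.4 Gbps raw -> ~25.92 Gbps; HDMI 2.0: 18 Gbps raw -> ~14.4 Gbps.
    links = {"DisplayPort 1.4 (~25.92 Gbps)": 25.92, "HDMI 2.0 (~14.4 Gbps)": 14.4}

    modes = {"4K @ 60 Hz": (3840, 2160, 60), "4K @ 144 Hz": (3840, 2160, 144)}
    for name, res in modes.items():
        need = video_bandwidth_gbps(*res)
        print(f"{name}: needs ~{need:.1f} Gbps uncompressed")
        for link, capacity in links.items():
            verdict = "fits" if need <= capacity else "needs DSC or chroma subsampling"
            print(f"  {link}: {verdict}")
    ```

    Running this shows 4K at 60 Hz fitting comfortably, while 4K at 144 Hz exceeds the effective DisplayPort 1.4 rate, which is why Display Stream Compression is needed for that mode.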

    The latest version of the DisplayPort standard is DisplayPort 2.0. This version significantly increases the bandwidth to an impressive 77.37 Gbps. It supports refresh rates of 144 Hz in 4K, raises the refresh rate for 8K to 120 Hz, and introduces support for 16K resolution (at 30 Hz), which HDMI does not yet support. DisplayPort 2.0 is currently the most advanced screen interface available in the industry, although its adoption is not yet widespread.

    HDMI

    The High-Definition Multimedia Interface (HDMI) is an industry standard introduced in 2002 as a digital replacement for older analog connections such as SCART. It allows the transmission of uncompressed, encrypted, high-definition video and uncompressed multi-channel audio through a single cable. HDMI offers several advantages, including features like HDMI-CEC (Consumer Electronics Control), which enables the control of multiple devices with a single remote. Its primary focus extends beyond PC use: it is the preferred interface for connecting multimedia devices to large screens, such as televisions.

    The standard HDMI connector, known as Type A, has 19 pins. There is also a Type B connector with 29 pins, which supports an expanded video channel for high-resolution displays. HDMI connectors are available in full size (Type A), as well as smaller variants with the same features, known as mini-HDMI (Type C) and micro-HDMI (Type D). One of the clear advantages of HDMI is its widespread adoption, as it can be found on many types of devices.

    However, there are a few disadvantages to consider. The HDMI connector is generally less sturdy compared to previous screen standards like VGA, making it more prone to accidental disconnections and potential physical or electrical failures. Additionally, there has been criticism regarding the inclusion of digital content protection (HDCP), which manages “digital restrictions” and prevents users from copying transmitted content.

    Since its introduction, HDMI has undergone several revisions. The most widely used versions at present are 1.4 and 2.0. These versions addressed the bandwidth limitations of previous iterations, achieving data transfer rates of up to 18 Gbps, which enables 60 FPS at 4K resolution and 144 Hz at 1080p. HDMI 2.0 also introduced significant improvements in other areas, such as support for high dynamic range (HDR) and increased color depth of 10 and 12 bits. The advantage is that HDMI 2.0 remains compatible with previous versions at the wiring level, allowing users to benefit from its features using older cables.

    HDMI 2.1 is the most recent major version of the standard and represents a significant milestone. It offers a substantial increase in maximum bandwidth, reaching up to 48 Gbps. This enables support for resolutions up to 8K and 10K at 60 Hz, as well as a 120 Hz refresh rate at 4K. Additionally, HDMI 2.1 introduces dynamic HDR support across all resolutions and includes features like variable refresh rate, enhanced audio return channel (eARC), and support for the Dolby Atmos and DTS:X audio formats. Despite being announced in 2017, widespread adoption of HDMI 2.1 has been slow, although most mid-range and higher-end monitors released in 2023 support this version.

    It is worth mentioning that a new revision called HDMI 2.1a has been announced. It introduces a new feature called Source Based Tone Mapping (SBTM), which enhances HDR dynamic range technology. SBTM allows the video source, such as a PC or game console, to handle HDR tone mapping on a display screen like a monitor or television. This feature is particularly relevant for gaming, as it enables devices to combine HDR, SDR, and dynamic HDR graphics to create visually compelling images as intended by game developers.

    Choosing Between DisplayPort and HDMI for PC Gaming

    The latest versions of both DisplayPort and HDMI are highly advanced, catering to the needs of most consumers. Graphics cards, whether dedicated or integrated, offer outputs for both interfaces, while modern monitors are equipped with corresponding inputs. High-end monitors support the most advanced versions of these interfaces. Various types of cables and adapters are available to convert between the two, but using adapters may result in some loss of benefits.

    When it comes to gaming PCs, there is no doubt that DisplayPort is the top choice and the preferred port for gamers. While HDMI 2.1 has made significant progress in terms of features, it still falls behind DisplayPort in several aspects, such as bandwidth and supported resolution.

    Additionally, if you are using an NVIDIA graphics card and have a monitor that supports the proprietary G-Sync image synchronization technology, the choice becomes even clearer in favor of DisplayPort. This is because NVIDIA does not support HDMI for G-Sync setups.

    The same applies if you plan to set up a multi-screen configuration. DisplayPort stands out from other interfaces as one of its advantages is the ability to output video content to multiple displays using Multi-Stream Transport (MST) technology. DisplayPort can be “split” through hubs and displays can be connected in a daisy chain fashion. HDMI lacks this capability due to its concept and design.

    In summary, when it comes to gaming PCs, DisplayPort should be the preferred interface. It is specifically designed for personal computers and offers more advanced features and possibilities compared to HDMI, which is more suitable for connecting multimedia devices to larger screens like televisions or video game consoles when DisplayPort is unavailable.

  • Exploring Secure Boot: Enhancing Security with UEFI’s Essential Feature

    Exploring Secure Boot: Enhancing Security with UEFI’s Essential Feature

    Today, I want to discuss Secure Boot, a security feature that comes along with UEFI. It has faced significant criticism from non-Microsoft circles. Many believe that Microsoft used Secure Boot as a strategy to diminish the presence of alternative operating systems in the market, particularly Linux and FreeBSD, the main alternatives to Windows on compatible hardware.

    When it comes to Secure Boot, there are two distinct groups with different perspectives on this feature. On one side, we have Windows users who never had to worry about compatibility issues because Microsoft has a strong hold on hardware manufacturers.

    On the other side, we find alternative operating systems, with Linux being the primary focus. In the Linux community, opinions on Secure Boot are varied. Simplifying the situation, we can identify three main camps: those who outright reject Secure Boot, those who have concerns about its management but not the concept itself, and those who fully embrace it.


    Controversy and Challenges Surrounding Secure Boot in Linux

    The use of Secure Boot in Linux has been far from smooth sailing, resulting in frustration and anger within the Linux community. One notable incident occurred in Ubuntu, where it was discovered that the implementation of Secure Boot was flawed because it effectively bypassed the verification the feature is meant to enforce. Moreover, the inclusion of the Lockdown module in the Linux kernel was debated for a staggering seven years. The main point of contention was whether Lockdown should be tied to Secure Boot. It’s worth mentioning that Linus Torvalds, the renowned creator of Linux, expressed his opposition to this idea, while Matthew Garrett, the creator of Lockdown, supported its integration.

    More recently, we have witnessed the advancements made by systemd, a framework that forms the foundation of many popular Linux distributions. This framework is now venturing into adopting security mechanisms implemented at the motherboard level, such as the TPM (Trusted Platform Module) required by Windows 11. This development raises questions about whether Linux will eventually require TPM for its functioning. However, at present, the Linux community remains steadfast in their determination to prevent such a requirement from becoming a reality.

    As we delve into the world of Secure Boot, it becomes evident that the real action lies beyond the realm of Windows. In fact, Microsoft’s role in this domain can be likened to that of a skilled soccer referee – when performing its duties effectively, it goes unnoticed and attracts minimal attention.

    What is Secure Boot?

    Secure Boot is a widely adopted security standard developed by the PC industry. Its primary objective is to ensure that only trusted software, approved by the manufacturer, runs on a computer.

    When you power on your computer, the motherboard firmware plays a crucial role. It meticulously verifies the signature of each software component present on the system, including UEFI firmware drivers, EFI applications, and the operating system itself. This signature check is essential in determining whether the software can be trusted. If the signature matches the expected one, the computer completes the boot process, and the firmware grants control to the operating system.

    It’s important to clarify that Secure Boot doesn’t involve data encryption or rely on the TPM (Trusted Platform Module). However, it can work harmoniously with the TPM, which is a module mandated by Windows 11 for specific security features. Essentially, Secure Boot’s main purpose is to ensure that only software with the necessary signature is authorized to run on the computer, thus enhancing overall security.

    Windows 8 and the UEFI Revolution: A Turbulent Chapter in Microsoft’s Journey

    Windows 8, a converged operating system that failed to win over users, stirred up controversy during its reign. One of its most notable and memorable features was Modern UI, initially known as Metro during its development phase. This tile-like interface for the Start menu received widespread criticism and failed to resonate with the majority of users. The negative reception of Modern UI played a significant role in limiting the adoption and popularity of the operating system, ultimately leading to Steve Sinofsky’s departure from Microsoft.

    Another contentious aspect of Windows 8 was its app store. Many accused Microsoft of attempting to monopolize software distribution through a channel entirely controlled by the company. This sparked vocal opposition from various quarters, including Valve and Epic Games. Interestingly, in a twist of fate, these two video game companies would later become bitter rivals, with Epic Games launching its own store in 2018.

    However, the most crucial aspect of this post revolves around Microsoft’s demand that OEMs (Original Equipment Manufacturers) adopt UEFI and enable Secure Boot to obtain Windows 8 certification. This unexpected move by Microsoft caught most Linux distribution developers off guard. They not only lacked support for Secure Boot, which could be disabled in most cases, but they also faced difficulties booting on UEFI due to GRUB, the widely used bootloader, lacking support for this firmware interface.

    Microsoft’s imposition of UEFI and Secure Boot requirements served as a significant catalyst for the development and adoption of these technologies. Nevertheless, it also led to an unsuccessful lawsuit against the Redmond giant. It is worth noting that x86 machines, where Secure Boot cannot be disabled, are not commonly encountered.

    Over time, significant developments have transformed the landscape surrounding Linux and its compatibility with Secure Boot. Several distributions, such as Ubuntu and Fedora, now offer robust support for Secure Boot, marking a positive shift. However, users who rely on NVIDIA graphics cards with their official drivers still encounter the need to disable Secure Boot for proper functionality.

    Despite these advancements, a considerable number of Linux users still perceive UEFI and Secure Boot as technologies controlled by Microsoft, serving only its interests. Consequently, many opt to disable Secure Boot to use their preferred operating system without restrictions.

    While the introduction of UEFI has raised questions about its origins, not everything associated with this firmware interface is negative. In fact, it has brought some significant benefits. For instance, UEFI has standardized the GPT partition table, replacing the outdated MBR system. It has also improved the integration between operating systems and motherboards, or their firmware. In the Linux realm, UEFI has paved the way for fwupd, a daemon that facilitates firmware updates for various hardware components, ranging from computers to peripherals. Although support for fwupd is gradually improving, only a limited number of manufacturers currently offer compatibility with this framework.

    Despite the lingering skepticism and challenges, the progress made in Linux’s relationship with UEFI and Secure Boot showcases a dynamic landscape that continues to evolve. With ongoing developments and growing support, the future holds the promise of even greater compatibility and usability for Linux enthusiasts.

    Differentiating UEFI and Secure Boot: Understanding Their Relationship

    The criticism directed towards Secure Boot has often extended to UEFI, leading many to perceive them as synonymous. However, it is essential to recognize that UEFI acts as a framework encompassing various features, with Secure Boot being one of them. Furthermore, while Secure Boot can be disabled, the operating system must still support UEFI itself unless some form of legacy (CSM) boot is available.

    In essence, UEFI (Unified Extensible Firmware Interface) refers to a set of specifications formulated by the UEFI Forum. These specifications outline the firmware architecture of a platform and define its interface for interaction with the operating system. The primary objective behind UEFI was to replace the aging BIOS (Basic Input/Output System) while ensuring compatibility, at least for a transitional period. Notably, the original specification was initially known as EFI (Extensible Firmware Interface) and was developed by Intel.

    As for Secure Boot, I have previously provided its definition in earlier sections. In summary, it is a mechanism responsible for verifying the authenticity and integrity of software before its execution on a computer. This verification process relies on digital signatures and ensures that only authorized and trusted software can run on the system.

    Understanding the distinction between UEFI and Secure Boot helps to clarify their individual roles and sheds light on how they work together to enhance system security and compatibility.

    Understanding the Inner Workings of Secure Boot

    To grasp the inner workings of Secure Boot, visualizing it through a diagram proves more effective than mere words. A critical aspect of this security feature lies in the databases it utilizes: the Allow DB (DB) and Disallow DB (DBX).

    The DB database serves as a repository for trusted loaders and EFI applications. It stores hashes and keys associated with these trusted components, allowing the firmware to authorize their execution during the boot process. On the other hand, the DBX database is responsible for housing compromised, revoked, and untrusted keys and hashes. If a code is signed with a key listed in DBX or its hash matches an entry in DBX, the boot process will be halted to prevent its execution.

    Red Hat publishes a diagram illustrating the process Red Hat Enterprise Linux follows to comply with the various steps and requirements of Secure Boot. It is worth noting that this process should closely resemble the one followed by other distributions, particularly those utilizing systemd.

    Demystifying Secure Boot: How It Works in Simple Terms

    Secure Boot, the security feature in modern systems, follows a step-by-step process to ensure the authenticity of software components. Let’s break it down into easily understandable steps:

    1. Certificate Check: First, it verifies if the public certificate is present in the Allow DB database. If it finds the certificate, it proceeds to the next step.
    2. Bootloader Check: The bootloader, typically GRUB 2, undergoes examination. It validates the bootloader’s signature to confirm its authenticity. If the signature is valid, it moves on to the next stage.
    3. Kernel Check: Next, the kernel (Linux) is inspected. Its signature is examined to ensure it hasn’t been tampered with. If the kernel’s signature is valid, the Secure Boot sequence is successfully completed, allowing the system to boot up.
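    The flow above can be summarized in a small, purely conceptual sketch. This is illustrative Python, not real firmware code; the database contents and component names are invented for the example.

    ```python
    # Conceptual sketch of the Secure Boot checks described above.
    # The certificate names and hashes below are invented for illustration.

    ALLOW_DB = {"oem-cert", "microsoft-uefi-ca", "shim-signer-cert"}  # trusted entries (DB)
    DISALLOW_DBX = {"revoked-bootloader-hash"}                        # revoked entries (DBX)

    def is_trusted(component):
        """component: dict with the certificate that signed it and its hash."""
        if component["hash"] in DISALLOW_DBX or component["signer"] in DISALLOW_DBX:
            return False                      # explicitly revoked -> always refused
        return component["signer"] in ALLOW_DB

    def secure_boot(bootloader, kernel):
        # Steps 1 and 2: certificate present in DB and bootloader signature valid
        if not is_trusted(bootloader):
            raise SystemExit("Boot halted: bootloader not trusted")
        # Step 3: kernel signature check
        if not is_trusted(kernel):
            raise SystemExit("Boot halted: kernel not trusted")
        print("Secure Boot chain verified, handing control to the OS")

    secure_boot({"signer": "shim-signer-cert", "hash": "abc"},
                {"signer": "oem-cert", "hash": "def"})
    ```

    On a real machine the kernel check is performed by the already-verified bootloader (or shim) rather than by the firmware itself, but the trust decision follows the same allow/deny logic.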

    It’s worth noting that the same logic applies to Windows, although Red Hat’s diagram showcases an example using individual components instead of a generic overview.

    Keep in mind that if a user disables Secure Boot in their motherboard settings, none of these verification processes occur. Disabling Secure Boot is the recommended option when testing Linux distributions or other operating systems on UEFI, as it eliminates limitations while still supporting the necessary firmware interface.

    Understanding how Secure Boot operates empowers users to make informed decisions about their system’s security and compatibility. Whether to enable or disable Secure Boot can be tailored to specific needs, striking a balance between convenience and protection.

    Exploring Secure Boot Settings on Your Motherboard

    To access the Secure Boot settings on your motherboard, follow these general steps:

    1. Enter BIOS/UEFI: Restart your computer and access the BIOS or UEFI firmware. This is usually done by pressing a specific key during the boot process, such as F2, Del, or Esc. The key to enter BIOS may vary depending on your motherboard manufacturer.
    2. Locate Security/Startup Section: Once inside the BIOS/UEFI interface, navigate to the Security or Startup section. The exact location may vary depending on your motherboard’s firmware.
    3. Find Secure Boot Configuration: Look for Secure Boot configuration options within the Security or Startup section. These settings control the behavior of Secure Boot.
    4. Adjust Secure Boot Settings: Depending on your motherboard, you may encounter different configurations. Some systems offer generic options, while others have settings tailored specifically for Windows, as it remains the most widely used operating system.

    Disabling Secure Boot is typically straightforward on desktop computers, allowing users to easily toggle the setting without additional hurdles. On some laptops, however, such as certain Acer models, disabling the feature may first require setting a supervisor password to access the BIOS.
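    If you simply want to confirm whether Secure Boot is currently active without rebooting, most Linux systems expose the state through the standard SecureBoot EFI variable; `mokutil --sb-state` reports the same information. The minimal sketch below assumes the machine booted in UEFI mode and that efivarfs is mounted (the GUID is the standard EFI global-variable GUID).

    ```python
    # Minimal sketch: read the SecureBoot EFI variable on Linux.
    from pathlib import Path

    VAR = Path("/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

    def secure_boot_enabled():
        if not VAR.exists():
            return None                  # legacy BIOS boot, or efivars not mounted
        data = VAR.read_bytes()          # 4 attribute bytes followed by the value
        return bool(data[-1])

    state = secure_boot_enabled()
    print({True: "Secure Boot is enabled",
           False: "Secure Boot is disabled",
           None: "Not booted in UEFI mode (or efivars unavailable)"}[state])
    ```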

    Remember, the steps outlined here provide a general guide, and the specific procedure can vary depending on your motherboard manufacturer and firmware version. It’s recommended to consult your motherboard’s manual or visit the manufacturer’s website for detailed instructions tailored to your specific hardware.

    By understanding how to access Secure Boot settings, you can manage this security feature according to your needs and ensure compatibility with different operating systems or software configurations.

    Final Thoughts

    In conclusion, Secure Boot has its advantages and disadvantages, as evident from the arguments presented by both its supporters and detractors. While it offers security benefits on paper, real-world implementation has exposed certain vulnerabilities, lending credibility to the concerns raised by critics.

    Regardless of opinions, Secure Boot has become a permanent fixture in the tech landscape. Its presence is expanding into areas like the Internet of Things (IoT), where Linux-based systems, particularly Ubuntu, dominate. However, the ability to disable Secure Boot remains crucial, as not all distributions fully support the feature. Additionally, less mainstream systems like those derived from Illumos may face compatibility challenges.

    The future of Secure Boot lies in striking a balance between security and compatibility, ensuring that users have the option to enable or disable it based on their specific needs and the level of support provided by their chosen operating systems or distributions. As the technology continues to evolve, it is essential to stay informed about its implementation and impact on different platforms to make informed decisions regarding its usage.

  • The AMD Radeon RX 7900 XT will have a VRAM memory bandwidth of 20 Gbps

    The AMD Radeon RX 7900 XT will have a VRAM memory bandwidth of 20 Gbps

    We are gradually learning more about the upcoming RDNA 3-based AMD graphics cards; this time, the news concerns the most potent model, the AMD Radeon RX 7900 XT. The Navi 31 GPU, which we know will reach 92 TFLOPS at FP32 in its complete configuration, will be the foundation of AMD’s next-generation card. Additionally, we now know that the AMD Radeon RX 7900 XT will use 20 Gbps memory, faster than the 18 Gbps first reported by Greymon55 on Twitter.

    https://twitter.com/greymon55/status/1553300314987540480

    However, things don’t just end here; with a future AMD Radeon RX 7950 XT, this speed might even be raised to 24 Gbps. Although it’s too soon to say for sure, the leaker asserts that this update and a faster clock speed are highly likely for the upcoming model.

    He also stated that the release date for AMD’s upcoming flagship would be November. We already knew this, but it has now been confirmed once more. The new AMD Radeon RX 7000 Series with RDNA 3 graphics is anticipated a few weeks after the September 15 launch of the new AMD Ryzen 7000 processors.
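    As a point of reference, total VRAM bandwidth is simply the per-pin data rate multiplied by the bus width. The bus width of the RX 7900 XT had not been confirmed at the time, so the widths in this small sketch are purely illustrative:

    ```python
    # Sketch: converting per-pin memory speed into total VRAM bandwidth.
    # Bus widths below are illustrative; the RX 7900 XT's bus width was unconfirmed.
    def vram_bandwidth_gbs(speed_gbps_per_pin: float, bus_width_bits: int) -> float:
        return speed_gbps_per_pin * bus_width_bits / 8  # GB/s

    for bus in (256, 320, 384):
        print(f"20 Gbps on a {bus}-bit bus -> {vram_bandwidth_gbs(20, bus):.0f} GB/s")
    ```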

  • Future graphics cards from AMD and NVIDIA start to surface with their names

    Future graphics cards from AMD and NVIDIA start to surface with their names

    Although their debuts are still some time off, AMD and NVIDIA have already started registering the names of their next graphics cards with the EEC (Eurasian Economic Commission), a procedure that almost all manufacturers follow.

    There are no surprises for the upcoming generation in any of the two lists, with AMD offering models in steps of 100, with standard and XT options, from the AMD Radeon RX 7500 to the RX 7900. Additionally, NVIDIA has registered the NVIDIA GeForce RTX 40 Series, which ranges from the RTX 4050 to the RTX 4090 Ti, using the same procedure as with the GeForce 30 series cards.

    Interestingly, Super variants of specific existing models have also been filed on the NVIDIA side, suggesting that the company may attempt to slot several graphics cards with existing GPUs between the present range and the upcoming GeForce cards.

    This decision may be related to NVIDIA having more Ampere GPUs on hand than it believes the market can absorb once the GeForce RTX 40 Series arrives.

  • Intel updates its Media Q2 drivers to add support for Intel Raptor Lake S and P

    Intel updates its Media Q2 drivers to add support for Intel Raptor Lake S and P

    Intel’s Q2 Media Drivers have been updated to work with the upcoming Raptor Lake processors. Both the Intel Raptor Lake-P laptop chips and the desktop Raptor Lake parts are now supported by this Intel software for video acceleration, encoding, and decoding on the GPUs built into Intel CPUs.

    Version 22.4.4 of the Q2 Media Drivers library has been released, and one of its new features is compatibility with processors from the upcoming generation. It will work with the Intel Raptor Lake-S desktop processors and the Intel Raptor Lake-P low-power laptop processors, about which we have heard very little so far.

    Higher numbers of mobile cores, up to 22 for the Intel Raptor Lake-P and 24 for the Intel Raptor Lake-HX, are anticipated in this new generation of mobile processors.

    On the other hand, the Intel Raptor Lake-S series is anticipated to go on sale in October, and Intel is already preparing its software for this upcoming generation. The laptop CPUs, however, are not expected to be readily available until 2023, most likely in January at CES. Nevertheless, Intel has already added the Intel Raptor Lake-P platform to the Q2 Media Drivers.

  • How to Pick the Best Processor for Your Computer

    How to Pick the Best Processor for Your Computer

    A computer comprises various essential components, and without any one of them it would be useless. The CPU is the PC’s brain, and it is the processor that (primarily) determines our computer’s performance. As a result, it’s critical to pick a suitable processor for your PC.

    Fbhtechinfo will show you how to do so without squandering money or performance while also ensuring that you don’t run out of processing power. The CPU you choose for your computer is crucial since it determines how you can use it. There are several aspects to consider when choosing a CPU, which we shall go through in this tutorial.

    When purchasing a computer, you must select a processor based on the jobs you want to complete, including future tasks. You can buy a PC without understanding how to edit video, for example, but if you want to take a course, you should consider this so that you may use your PC to edit video in the future.

    Another element to consider is longevity. If we want to acquire a PC that will last us several years, we must remember that technology evolves rapidly. Purchasing an out-of-date CPU can be a costly mistake if we want to keep our machine for several years.

    It also helps to consider whether you require powerful graphics or whether a GPU integrated into the processor would suffice. Not all CPUs come with graphics, and integrated GPUs are typically significantly less potent than even the most basic, entry-level standalone graphics cards.

    These and other characteristics are reflected in the processor ranges offered by the two large manufacturers. Whether you’re looking for a desktop or a laptop, AMD and Intel provide something for everyone (and any budget).

    Intel Vs AMD

    Intel and AMD are the two primary processor makers, and they jointly control the market for laptop and desktop CPUs. AMD and Intel both have a wide range of CPUs that will match your needs for your PC or pocket.

    Intel Core Processors 12th Generation

    Generation after generation, the two have fought to provide the most advanced technology in the form of a CPU. Each has its quirks in terms of manufacturing nodes, instruction sets, and particular characteristics.


    AMD used PGA-type sockets (pins on the processor) in its desktop PCs until the Ryzen 7000, which moves to LGA (pins in the board socket). Intel has long used LGA-type sockets in its desktop PC processors. Laptops typically use a package known as BGA, which is soldered directly onto the motherboard.

    If you choose an AMD processor, keep in mind that most of its desktop CPUs do not have integrated graphics; only the models ending in G do. Intel, on the other hand, includes an integrated GPU in a significant number of its processors, which are distinguished by not ending in F.
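    As a toy illustration of that naming convention (a simplified sketch only; real product lines have exceptions, and the helper below is hypothetical, not an official tool), the rule could be encoded like this:

    ```python
    # Toy helper: guess from the model suffix whether a desktop CPU has integrated
    # graphics. Naming rules are simplified and real lineups have exceptions.
    def has_integrated_graphics(model: str) -> bool:
        model = model.upper().strip()
        if model.startswith("AMD") or "RYZEN" in model:
            return model.rstrip("X").endswith("G")   # e.g. Ryzen 5 5600G -> True
        if "INTEL" in model or "CORE" in model:
            return "F" not in model.split()[-1]      # e.g. Core i5-12400F -> False
        raise ValueError("unknown vendor")

    for cpu in ("AMD Ryzen 5 5600G", "AMD Ryzen 7 5800X",
                "Intel Core i5-12400F", "Intel Core i5-12400"):
        print(cpu, "->", has_integrated_graphics(cpu))
    ```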

    Processors from AMD’s Ryzen 5000 series


    Both brands have good CPUs for what you need; choose one based on your existing hardware, or pick a platform if you’re buying entirely new hardware. AMD and Intel have always competed closely, which has pushed both to build high-quality, market-driven processors.

    Cache, Cores, and Architecture

    Each manufacturer uses a different architecture for its processors. In its 12th Gen parts, Intel has chosen to apply a hybrid-core design widely seen in mobile CPUs, while AMD has been employing Zen cores and chiplets for some time.

    AMD’s latest processors are based on the Zen architecture, announced in 2017. Zen is different from Bulldozer in that it was designed from the ground up to execute more instructions per cycle. FinFET technology is used to increase the energy efficiency of these CPUs. AMD has had significant success with this architecture, which has allowed them to offer improved performance in its processors.

    As a result of the Zen cores’ success, AMD has continued to improve the technology. Zen 2 followed Zen, boosting instructions per cycle by up to 15% while doubling the L3 cache. It also doubled floating-point throughput and increased Infinity Fabric bandwidth.

    Then came Zen 3, which brought faster clock speeds and more instructions per cycle than Zen 2. It was also fundamentally revamped, with more than 20 essential modifications that make it more efficient.

    Intel elected to divide the cores in its 12th Generation processors into two types, a tendency that will continue in the future. Processors now combine high-performance cores, which handle the most demanding tasks, and high-efficiency cores, which handle the simplest background activities while consuming less power. This combination has surprised more than one user of Intel’s twelfth-generation processors.

    Intel appears to be sticking with the hybrid core design that we’ve seen so much in mobile phones, and it appears to be producing excellent results for future generations. With a higher core count and faster clock speeds, the 13th generation Raptor Lake will outperform Alder Lake.

    Intel is working on chiplet-based designs for future generations, with each chip made using the most suitable process to improve overall performance and efficiency. This approach will be visible in the fourteenth generation, Meteor Lake.

    A processor’s cache memory also plays a vital role, albeit this depends on the processor type. Cache memory is a small, fast memory that stores the data and instructions the processor needs to access quickly.


    It is separated into three levels. The L1 cache is the quickest and nearest to the processor; each core has its own L1 cache, where instructions that must be executed immediately are kept. The L2 cache, the intermediate level, usually has a larger capacity than the L1 but is a little slower, and stores the instructions that will be executed shortly. Finally, the L3 cache is larger than the others and is usually shared between cores.
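    To get a feel for why keeping data close to the cores matters, the small experiment below walks the same array once in order and once in a random order. It is only a rough demonstration: absolute times depend on your machine, and Python’s interpreter overhead shrinks the gap compared with compiled code, but the in-order walk should still come out ahead because it makes far better use of the caches.

    ```python
    # Rough demonstration of cache locality: sequential vs. random access.
    import random
    import time

    N = 2_000_000
    data = list(range(N))
    indices = list(range(N))
    random.shuffle(indices)

    def total(order):
        s = 0
        for i in order:
            s += data[i]     # sequential order reuses cached memory; random order does not
        return s

    t0 = time.perf_counter(); total(range(N));  t_seq = time.perf_counter() - t0
    t0 = time.perf_counter(); total(indices);   t_rnd = time.perf_counter() - t0
    print(f"sequential: {t_seq:.2f}s  random: {t_rnd:.2f}s")
    ```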

    Choose a PC based on its Intended usage

    We may divide the purposes for using your PC (broadly speaking) into four categories; let’s look at which processor is best for each.

    Gaming

    If you’re going to dedicate your computer to gaming, you should always use the latest generation’s highest-end processors. Furthermore, it is advantageous to select a processor that enables overclocking so that, with enough cooling, you can achieve higher performance when needed. The AMD Ryzen 7 and 9 series and Intel’s Core i7 and i9 series are the most potent processors recommended for gaming. In some cases, the AMD Ryzen 7 or Intel Core i7 offers a better result than the very top of the range. When it comes to running games, a large L3 cache usually provides an edge, even when the other specifications are similar.

    In these circumstances, it’s best to go for an unlocked processor, which enables overclocking to get even more performance. You’ll need a robust cooling system, and liquid cooling is an excellent option, whether it’s an all-in-one model or a custom one you can build to your specifications.

    3D rendering and Video editing

    These tasks also require a powerful processor, but the L3 cache is not so crucial here. In any case, a powerful processor from Intel or AMD, such as a Core i9 or a Ryzen 9, will come with a large cache, and you will have a PC for multiple uses: in addition to rendering and editing video with great ease, you will be able to play the latest games at the highest possible quality.

    Professional processors are an option if you require a powerful computer capable of doing complex tasks and have an unrestricted budget. Intel’s X-series CPUs and AMD’s Threadripper processors have more cores to achieve better performance in this task.

    Office Automation

    You don’t need the fastest CPU or the best performance for office automation. Because office chores do not require massive calculations, you can choose from the entry-level options. It is also a good idea to get a processor with integrated graphics, since you will not need much graphics power; this saves you the cost of a separate card, which would raise the price of your computer. Intel Core i3 processors are an excellent choice. AMD also has CPUs with integrated graphics (the ones ending in G) that offer excellent performance, despite the smaller selection.

    Multimedia

    A powerful processor is not required here; a mid-range processor will suffice. These processors are priced reasonably, and you can also add an independent graphics card to help with the video encoding and decoding you will use to watch movies and series. Intel has the Core i5, and AMD has the Ryzen 5. Whether or not you plan to play games, you can save money by choosing an AMD CPU with integrated graphics (the ones ending in G) or an Intel Core i5 with an integrated GPU.

    The best processor for your computer

    By following these guidelines, you will be able to select a CPU appropriate for the use you intend to give your PC. If you’re solely planning to use an office suite, an Intel Core i9 or AMD Ryzen 9 isn’t worth it. Conversely, if you want to build a gaming PC around an entry-level processor with integrated graphics, you won’t get far: you may be able to run a significant number of games, but image quality and resolution will be severely limited.

    Adapt the choice to your budget and needs; choosing a cheaper processor than you actually need can result in long-term issues.

  • The AMD Radeon RX 6950 XT, RX 6750 XT, and RX 6650 XT graphics cards are now available

    The AMD Radeon RX 6950 XT, RX 6750 XT, and RX 6650 XT graphics cards are now available

    Today, AMD has revealed the AMD Radeon RX 6×50 XT, the expected refresh of its 6000 series GPUs. The AMD Radeon RX 6650 XT and RX 6750 XT fill performance gaps in the mid and high ranges, respectively, while the powerful AMD RX 6950 XT claims the performance crown.

    AMD Radeon RX 6X50 XT technical specifications

    Paired with 128 MB of Infinity Cache, the most potent model offers 16 GB of GDDR6 memory on a 256-bit interface with an effective bandwidth of 1,739 GB/s. The RX 6750 XT has 12 GB of 192-bit GDDR6 with 1,236 GB/s of bandwidth, whereas the RX 6650 XT has 8 GB of 128-bit GDDR6 with 469 GB/s of bandwidth.

    The TBP (total board power, which includes the GPU, memory, and other components on the card) ranges from 180 W for the AMD Radeon RX 6650 XT to 335 W for the Radeon RX 6950 XT, with the RX 6750 XT at 250 W in the middle.

    The specifications of the three models are as follows (architecture, compute units with ray tracing accelerators, stream processors, game/boost clocks, memory, bandwidth with Infinity Cache, TBP, and launch price):

    AMD Radeon RX 6950 XT: RDNA2, 80 CUs, 5,120 stream processors, 2,100 MHz game / 2,310 MHz boost, 16 GB GDDR6 256-bit, 1,739 GB/s, 335 W, $1,099 (€1,269)
    AMD Radeon RX 6750 XT: RDNA2, 40 CUs, 2,560 stream processors, 2,495 MHz game / 2,600 MHz boost, 12 GB GDDR6 192-bit, 1,236 GB/s, 250 W, $549 (€634.90)
    AMD Radeon RX 6650 XT: RDNA2, 32 CUs, 2,048 stream processors, 2,410 MHz game / 2,635 MHz boost, 8 GB GDDR6 128-bit, 469 GB/s, 180 W, $399 (€459.90)

    Performance of AMD Radeon RX 6950/6750/6650 XT

    AMD claims that the AMD Radeon RX 6950 XT, with its 335 W, delivers a better performance/consumption and performance/price ratio than NVIDIA’s RTX 3090 and RTX 3090 Ti, which have TGPs of 350 and 450 W and prices of $1,799 and $1,999, respectively.

    When Resizable Bar, Nvidia Image Scaling, and Radeon Super Resolution scaling technologies are used, this graphics card outperforms the RTX 3090.

    The intermediate model, the AMD Radeon RX 6750 XT, claims better performance than the NVIDIA GeForce RTX 3070.

    The AMD Radeon RX 6650 XT, on the other hand, will compete with the RTX 3060 on pricing and performance.

    The AMD Radeon RX 6X50 XT cards are available in stores today and will sell alongside the current 6000-series graphics cards, so no current models such as the RX 6900 XT will be replaced.

  • Why are SSDs with larger capacities faster?

    Why are SSDs with larger capacities faster?

    If you’ve ever purchased an SSD, you know that larger capacities come at a higher price, which, depending on your budget, may lead you to choose a smaller capacity and a lower price. But did you know that larger solid-state drives also offer additional benefits?

    Traditional hard disks, or HDDs, rely on a physical read/write mechanism: a head that seeks across the platter to the sector it needs to read or write in the corresponding block.

    Solid-state drives (SSDs) store data in NAND flash memory and use a high-speed controller to access the drive’s NAND modules in parallel. In an SSD, therefore, adding more NAND modules allows the controller to access more memory modules at once, allowing for faster read and write speeds.
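    To illustrate the idea (the figures below are assumptions for the sake of the example, not vendor specifications), throughput scales roughly with the number of dies the controller can read in parallel, until the controller or host interface becomes the bottleneck:

    ```python
    # Back-of-the-envelope sketch of why more NAND dies help (illustrative numbers).
    PER_DIE_MBPS = 400            # assumed sustainable read speed of one NAND die
    CONTROLLER_LIMIT_MBPS = 3500  # assumed controller/interface ceiling (e.g. PCIe 3.0 x4)

    for dies in (2, 4, 8, 16):
        throughput = min(dies * PER_DIE_MBPS, CONTROLLER_LIMIT_MBPS)
        print(f"{dies:2d} dies -> ~{throughput} MB/s")
    ```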

    We can’t add more modules to a low-capacity SSD

    Because it costs almost the same to produce a 64 GB memory die as a 128 GB one, the companies that make NAND chips would earn less by manufacturing smaller modules. At the same time, reaching a given capacity, say 250 GB, with smaller 64 GB modules would require more of them, implying higher production costs.

    For this reason, we see disparities in read speeds between lower- and higher-capacity SSDs. It may seem unreasonable, but producing low-capacity drives out of many small modules would result in a financial loss for manufacturers.

    Even so, it is not difficult to come across fast low-capacity SSDs.

    Improvements in controllers and in NAND manufacturing techniques, such as the move from 2D to 3D NAND, have boosted read and write performance even in smaller drives, making the difference less noticeable.

    SSDs with larger capacities provide us with greater advantages.

    The fact that an SSD contains more memory modules also means that writes are spread across more cells, which increases durability: it takes longer for any given cell to reach its program/erase limit, unless you constantly use the SSD at its capacity limit. As a result, a larger-capacity SSD offers superior performance, capacity, and longevity.

     

  • Version 22.3.1 of AMD Radeon Software would be modifying CPU settings without permission

    Version 22.3.1 of AMD Radeon Software would be modifying CPU settings without permission

    According to WCCFTech, version 22.3.1 of the AMD graphics drivers (AMD Radeon Software) appears to be changing the system’s CPU settings without the user’s explicit permission, causing issues, particularly for those who have carefully tuned their configuration to their liking.


    The current theory revolves around the Ryzen Master module included with the graphics driver, which has access to CPU parameters and, in this version, adjusts them without being asked. AMD has not commented on the matter.

    This happens when both an AMD CPU and an AMD GPU are installed, but not if one of the components is from another manufacturer. The issue appears to involve Precision Boost Overdrive being enabled when a graphics profile is activated on the system.

    For the time being, there appears to be a workaround: although it is a time-consuming operation, creating a new profile for the graphics card after installing the driver should keep the system from behaving this way. Another solution is to use Radeon Software Slimmer, a third-party application that removes the Ryzen Master SDK module and thereby prevents the CPU-related alterations.

  • Two days before the event, Intel releases a fresh teaser for the Intel Arc

    Two days before the event, Intel releases a fresh teaser for the Intel Arc

    Intel’s graphics division has posted a new teaser video on Twitter two days before it unveils the new Intel Arc dedicated graphics cards. The short video shows a laptop that we expect to carry one of these new Intel Arc chips.

    The name and model aren’t visible in the video, but it is likely one of the leaked models featuring one of the new Intel Arc A350M or A370M GPUs to be unveiled on the 30th.

    One of the replies also confirmed that, at first, Intel Arc graphics will only be available for laptops, which we already knew. It’s unclear which models will be showcased at the event, given that laptops with Intel Arc will be available for purchase starting on the day of the event. The Acer Swift X with an Intel Arc A370M and the Samsung Galaxy Book2 Pro have been spotted, although the latter’s included model has not been specified.

    These models use Intel Arc GPUs based on the DG2-128 chip, the smallest Intel Xe-HPG configuration, with no information yet on the higher variants based on the DG2-512, the Intel Arc A500 and A700. The Dell Precision 5470 will also incorporate an Intel Arc for mobile workstations, the Intel Arc A30M Pro, announced today. We had never heard of this model before, so don’t hold your breath for an announcement of mobile workstation cards on the 30th.

    We’ll clear up any lingering questions at the Intel presentation in a few days, where we’ll hear these and other specifics about the new Intel Arc graphics cards.

  • MSI GeForce RTX 3090 Ti SUPRIM X with 480W TDP and 16-pin connector has been leaked online

    MSI GeForce RTX 3090 Ti SUPRIM X with 480W TDP and 16-pin connector has been leaked online

    According to Videocardz, the MSI version of the NVIDIA GeForce RTX 3090 Ti has been leaked. This card has a TDP higher than the 450 W of the reference version, in this case 480 W, which is within what the 12+4-pin connector plus the PCI Express slot can provide (525 W), as we mentioned yesterday.

    Like the rest of the RTX 3090 Ti cards, it uses the fully enabled NVIDIA GA102 chip, with 10,752 CUDA cores and a new memory subsystem built from 2 GB GDDR6X modules.

    In Gaming and Silent modes, the card’s boost frequency will be 1,950 MHz, while in Extreme Performance mode it will be 1,965 MHz. The actual memory frequency is unknown, but the modules will run at an effective 21 Gbps.

    Its physical design will not leave anyone indifferent: with a weight of 2.1 kg and a thickness of 3.5 slots, the cooling system should be able to keep this graphics card’s 480 W TDP at bay. Since that will be a difficult assignment, however, we will have to wait for the results.

  • AMD intends to release a Ryzen 7000 series processor with a 170 W TDP.

    AMD intends to release a Ryzen 7000 series processor with a 170 W TDP.

    AMD has announced new 4000 and 5000 series CPUs to round out its current Zen3 and Zen2 processor portfolio, but the new AMD Ryzen 7000 series processors are expected to be released in the third quarter of this year.


    These CPUs were revealed at CES 2022 and will be the first mainstream AMD desktop processors to use an LGA socket instead of a PGA socket. According to a tweet from Greymon55, AMD will offer a model with 16 cores and a TDP of 170 W.

    https://twitter.com/greymon55/status/1506910277375180803

    With 16 cores and 32 threads, this arrangement is identical to the existing AMD Ryzen 9 5950X; however, this new AM5 processor will be recommended for use with liquid cooling. AMD does not appear to be updating its core configuration in this generation, so we won’t see CPUs with 24 or 32 cores until at least 2023.

    However, thanks to its hybrid core arrangement, the new Intel Raptor Lake, which will be introduced around the same time as these Ryzen 7000 Series chips, will increase core counts to up to 24 cores and 32 execution threads.

  • The Intel database shows a new DG2MB GPU.

    The Intel database shows a new DG2MB GPU.

    With Intel’s new discrete graphics for laptops set to be released on March 30, Igor’s Lab has discovered an odd entry in the Intel Arc database that includes a GPU named DG2MB.

    It’s unclear what this new reference refers to. It appears to be a board combining a dedicated Intel Arc graphics card and an Intel processor, which we presume will be for laptops.

    The entry categorizes the DG2MB as a graphics-related product. However, it lists a speed of 4,000 MHz, which is unheard of for a laptop graphics card; we can presume that the figure is incorrect or corresponds to the CPU speed.

    In addition, the stated TDP of 200 W is very high for a laptop graphics card. Everything points to it being a combined CPU and GPU figure, although it could also be an error. This new entry could therefore correspond to thin laptops with dedicated Intel Arc graphics and Intel processors.