Liberty Posted April 3, 2018

Speaking of Intel, if you want to have a look at the new lineup: https://www.anandtech.com/show/12607/intel-expands-8th-gen-core-core-i9-on-mobile-iris-plus-desktop-chipsets-and-vpro
oddballstocks Posted April 3, 2018

Quoting the earlier exchange:

"Well, that's not true. MacBook Airs and Pros use i7s, which is a mid-range Xeon. Some use i5s - lower-range Xeons and probably not worth talking about here. The fact is that one of the things that made Macs so successful is that they've overpowered their systems. That's one of the main reasons why they were so stable and popular. The idea that x and y chip would be good enough for what the user needs is a radical departure from this strategy. I don't doubt that Apple can design some chips to put in their Macs. What eludes me is what they have to gain from it."

"I am not going to argue this at length... but an i3, i5, i7 are NOT Xeons. Xeons frequently use different motherboards, frequently have different LGA sockets, Xeons don't have integrated graphics, Xeons can use ECC memory... Xeons can be used in multiprocessor configurations (ever see a dual-processor i7 system?), and on and on. Xeons are typically a LOT more expensive than i3s, i5s, and i7s. The highest-end Xeon is about $12,500... contrast that with the highest-end i7, which is about $800. Xeons are used primarily in SERVER configurations, but are sometimes used in HIGH-end workstations. There are some laptops that use Xeons (Dell Precision series), but I have yet to see any Apple laptop using a Xeon processor. PC Xeon laptops are pretty rare & expensive. How do I know this? It is my primary business. I use 4 different Macs and 1 PC. I sell HUNDREDS of computers/servers every year. Over the course of my career, I've sold TENS of thousands of computers, and hundreds of servers."

This is correct. I have a small server farm and they're all Xeons, totally different from consumer chips. I don't know of any laptops with ECC, but only an idiot would boot up a server without ECC. If you're really worried about data corruption, they support memory mirroring as well, sort of like RAID 1 for memory.

There are a number of crazy virtualization features built into Xeons too. Beyond that, Xeons are configurable: I can go into the BIOS and set the speed, set how it runs, set limits on how things run, turn cores on/off, etc.

One of Intel's biggest mistakes was using the same numbers for vastly different things. There are i7s from years ago and i7s from today; they have the same name but different performance profiles. The latest Mac has an i7-860 in it. The PassMark score is 5025 (this is a relative number). I have an 'old' server running an E5-2620 v0; its PassMark is 7935. This is a legacy processor that for all intents and purposes is dead to Intel - it's too slow (they retail for $200 on Amazon). I have some machines with processors benchmarking in the 20k range on the same PassMark scale; they're a version behind the latest Intel - I purchased last year's model to save money. That's 4x as fast as the fastest Mac, and they're considered outdated as well. A Xeon Platinum 8173, the latest and greatest, is hitting 28k on PassMark, 5.6x faster than the Mac. I'm noting this to show that the gulf between consumer and enterprise is quite large, and Xeon is the enterprise-level stuff.

Apple has made incredible strides with ARM, but that's because they started from a low base. Things will start leveling out, mainly because they're bumping up against the limits of thermal efficiency with passive cooling in a sealed case. This is an interesting and seemingly objective article about ARM at the server level: https://blog.cloudflare.com/arm-takes-wing/

Where ARM wins is by having a ton of cores available; this is the route Apple is taking as well. An issue with this is that if software isn't massively parallel, having extra cores doesn't matter much.

I'd be happy to see some competition for Intel in the server space. As Cloudflare points out, they have 98% market share. Their prices are high, and stay high.
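A quick sanity check on the ratios above, plus a sketch of the "extra cores don't matter unless software is parallel" point via Amdahl's law. The scores are the rough figures quoted in the post, not fresh benchmark results:

```python
# Sanity-check the PassMark ratios quoted above, then illustrate why
# extra cores don't help serial software (Amdahl's law). Scores are the
# rough figures from the post, not fresh benchmark results.

scores = {
    "i7-860 (latest Mac, per the post)": 5025,
    "Xeon E5-2620 v0": 7935,
    "last-gen Xeon (~20k)": 20000,
    "Xeon Platinum 8173 (~28k)": 28000,
}

base = scores["i7-860 (latest Mac, per the post)"]
for name, score in scores.items():
    print(f"{name}: {score / base:.1f}x the Mac's score")

def amdahl_speedup(parallel_fraction, cores):
    """Best-case speedup when only parallel_fraction of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# On 32 cores, a 50%-parallel workload barely doubles (~1.94x), while a
# 95%-parallel one gets ~12.5x -- still nowhere near 32x.
print(round(amdahl_speedup(0.50, 32), 2))
print(round(amdahl_speedup(0.95, 32), 2))
```

The 28000/5025 ratio works out to the 5.6x the post cites, and the Amdahl numbers show why a many-core chip only pays off for workloads that parallelize well.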
rkbabang Posted April 3, 2018

I think you guys are missing rb's main point. No, i7s are not Xeons, but Apple's ARM chips are not i7s either. The best Apple chip can keep up with a mid-range i5. If Apple switches its desktops and laptops to ARM, you will no longer have the option of an i7. Apple's ARM chips in 2020 will not be the equivalent of Intel's 2020 i7s, never mind its 2020 i9s or Xeons.
oddballstocks Posted April 3, 2018

One other thing that will be interesting here is how Apple will handle the conversion. When they went from PPC to x86 they had an instruction-emulation layer. They could do this because x86 was a lot faster, and they had rights to the PPC instruction set.

This time around it's different. The ARM chips are slower, and they don't have rights to x86_64. From what I've read, x86 is tightly held; both Intel and AMD have cross-licensing agreements for the IP. I believe there might be one or two other players, but they all have these mutual-destruction agreements: if Apple purchased AMD, then AMD would lose all x86 rights, and the same with an Intel sale. They've all guaranteed that they will continue to exist, because if anything changes they lose access to the crown jewels. If that weren't the case, the obvious play would be for Apple to purchase AMD.

My guess is they just decide that old apps aren't supported on the new system, and anyone wanting to use an old app will have to purchase an older Mac. Or, even more likely, the ARM chips will go in the lower-end Macs, the ones that will be unified iPad/iPhone/Mac systems all sharing the same apps. If you want to be a 'power' user, you purchase the more expensive ones.
oddballstocks Posted April 3, 2018

"Apple would love to have a laptop with 20+ hours of battery life that its Intel-using competitors can't match, along with FaceID to unlock and a secure enclave to store all biometric data, for example. And since they control the OS, they can more tightly integrate the two (i.e. have specialized cores that get used by the OS for very specific functions, perhaps a hardware x86 emulator optimized for the transition period so that old software can run on the new ARM OS)."

I know you're big in the Apple camp, but I have a Surface Book with 16 hours of battery life, and it unlocks with my face as well. The newer Surface Books have even longer battery life. Plus the screen detaches to become a tablet. These things exist now, but Apple didn't create them. By tweaking the battery settings I've been able to get more than 16 hours out of it while traveling, which I consider pretty impressive.
rb Posted April 3, 2018

"My guess is they just decide that old apps aren't supported on the new system and anyone wanting to use an old app will have to purchase an older Mac."

If they do this, I think it'll be a disaster. The app developers will drop support for their older apps, and basically everyone with an older Mac will be screwed. The thing is, I wouldn't put this past Apple.
rb Posted April 3, 2018

Yes, that's basically where I was going. But since we've started digging into this, let me go a bit deeper, because some fallacies have been written here.

Yes, Xeons and the i-series are meant to do different things: the i-series as consumer chips and Xeons for servers. There are also a million types of Xeons, ranging from meh chip to starship, with a wide range of prices. Just because the top Xeon sells for $12,000 doesn't mean that Xeons are expensive; there are cheap Xeons as well. As oddball pointed out, Xeons are super configurable too.

On whether i-series chips are Xeons: that's both true and false. High-end i5s and, I think, all i7s are actually Xeons by construction, but with different feature sets enabled. For example, the i7 has the ECC controller on the chip, but it's disabled. The overclocking features are disabled on the Xeons but enabled on certain i5 and i7 models, etc. In practice these do make them different chips, but the difference is more about segmenting their use than creating differences in computing power. As far as I know, Intel does this to reduce manufacturing costs.

Now, as pointed out here, the classical thinking about the i-series and Xeons was: Xeons for servers, Xeons more expensive than the i-series, Xeons give you ECC and the i-series doesn't, the i-series has graphics and Xeons don't. That hasn't been true for a while:

1. There are plenty of Xeons that have graphics.
2. Traditionally, Xeons without graphics were CHEAPER than their i5 and i7 equivalents by about $50. So if you had a video card, you could get the Xeon equivalent of the i5 or i7, get all the Xeon-y stuff, and save some money too. Neat. Lately Intel has been tinkering with prices, so I'm not sure this is still true for the current generation.
3. For a while now there have been i3/i5/i7 models that do support ECC RAM.
Liberty Posted May 22, 2018

https://arstechnica.com/gadgets/2018/05/new-speculative-execution-vulnerability-strikes-amd-arm-and-intel/
Sunrider Posted May 23, 2018

Hi - not sure if that's quite right. If you think about how most modern software runs, it's not machine code but bytecode that runs in an execution environment (kind of like a virtual machine - e.g. C#, Java, Swift), so I would think all you need to do is get the execution environment (and OS) to run on your new processor. The source code of the program gets compiled into VM bytecode as before, and the actual program doesn't know that the processor underneath is different. (Of course this doesn't account for edge-case/high-performance/direct machine-code type programs.)
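A small illustration of this point, using Python as a stand-in for the bytecode languages mentioned (the C#/Java story is analogous): the compiled function is architecture-neutral bytes, and only the interpreter underneath is machine code.

```python
import dis

def add(a, b):
    return a + b

# CPython compiles the function to architecture-neutral bytecode: the
# same co_code bytes run on x86, ARM, or anything else the interpreter
# has been ported to. Only the interpreter itself is machine code.
print(dis.code_info(add))
print(add.__code__.co_code)

assert isinstance(add.__code__.co_code, bytes)
assert add(2, 3) == 5
```

So porting the runtime is indeed the bulk of the work for this class of software; natively compiled programs are the harder case, as the parenthetical above notes.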
Liberty Posted June 26, 2018

Good read: https://stratechery.com/2018/intel-and-the-danger-of-integration/
mwtorock Posted October 3, 2018

https://www.bloomberg.com/news/articles/2018-10-02/intel-gain-brings-amd-pain-as-report-touts-10-nanometer-strides

The bull/bear battleground: whether Intel is going to lose market share due to its 10nm yield issues.
oddballstocks Posted October 3, 2018

It seems one issue with the PowerPC-to-x86 transition was that PowerPC was big-endian and x86 is little-endian.
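The endianness difference is easy to see with Python's struct module: the same 32-bit value laid out in the two byte orders is an exact byte reversal, which is one reason an emulation layer crossing the PPC/x86 boundary has to byte-swap multi-byte values.

```python
import struct

value = 0x12345678

little = struct.pack("<I", value)  # x86 layout: least-significant byte first
big = struct.pack(">I", value)     # PowerPC layout: most-significant byte first

print(little.hex())  # 78563412
print(big.hex())     # 12345678

# The two layouts are exact byte reversals of each other, so an emulator
# must swap bytes whenever data crosses the architecture boundary.
assert little == big[::-1]
```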
Liberty Posted October 5, 2018

I don't think it's clear that we know ARM would be slower, since we haven't seen what an ARM chip designed by Apple for laptops and desktops would be like. Their chips in phones and tablets are already competitive with some laptops despite the much lower power draw and smaller thermal headroom. I'd imagine that if they designed a chip to use a lot more power, they could get something pretty competitive.

On a JavaScript benchmark that aims to be pretty real-world, the iPhone XS came out on top of an iMac Pro: http://macdailynews.com/2018/09/23/apples-iphone-xs-is-faster-than-an-imac-pro-on-the-speedometer-2-0-javascript-benchmark/

Granted, this might be because the A12 has a lot of L1 cache, but still, I'd like to see what they can do with a desktop/laptop-class chip designed in-house.

Recent review of the A12: https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets

"Overall the new A12 Vortex cores and the architectural improvements on the SoC's memory subsystem give Apple's new piece of silicon a much higher performance advantage than Apple's marketing materials promote. The contrast to the best Android SoCs have to offer is extremely stark - both in terms of performance as well as in power efficiency. Apple's SoCs have better energy efficiency than all recent Android SoCs while having a nearly 2x performance advantage. I wouldn't be surprised that if we were to normalise for energy used, Apple would have a 3x performance efficiency lead. [...] Apple's marketing department was really underselling the improvements here by just quoting 15% - a lot of workloads will be seeing performance improvements I estimate to be around 40%, with even greater improvements in some corner-cases. Apple's CPUs have gotten so performant now, that we're just margins off the best desktop CPUs"
oddballstocks Posted October 5, 2018

It's a totally different way of doing things. Per cycle, ARM is a lot worse, but you can pack a lot of cores together and create a dense chip. It depends on workloads. Loading a single-threaded app? Intel will crush it. Serving 100,000 website requests? ARM might kill it. Look at JS: when loading a page you're executing a ton of simultaneous scripts.

If you want to see true ARM real-world performance, look at Cavium servers. They can get similar or better performance by packing the cores: https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/

Note, it's an LGA socket with a lot of thermal cooling. Where this crushes is in additional PCIe lanes and more memory bandwidth. That obviously doesn't matter for laptops or phones, but it's worth pointing out. Where these things really add value is as hypervisors: you can sell off the cores and threads as individual VMs. The article goes in depth into the toolchain for the platform. It's matured.

I used to mess around with alternative architectures. I had BSD running on SPARC, macppc, and HP's Alpha. I messed with it the other way and got Solaris on x86, Darwin on x86, etc. What I found in all cases was that the tooling was severely lacking. Yes, you could boot Linux or BSD, but you had to recompile most tools, and some just crashed due to architecture errors. I would try to muck with the C a little, but it's a chore. What ARM needs is a robust ecosystem. There were rumors Microsoft might release Windows Server on ARM; that would be significant.
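A toy model of the tradeoff being discussed: a few fast cores versus many slower ones. All numbers are made up for illustration, not real benchmark figures.

```python
# Toy model of the tradeoff discussed above: a few fast cores versus
# many slower ones. All numbers are illustrative, not benchmark data.

def request_latency(work_units, per_core_speed):
    """Time to finish ONE request: only single-core speed matters."""
    return work_units / per_core_speed

def aggregate_throughput(cores, per_core_speed):
    """Independent requests served per unit time across all cores,
    assuming the workload is embarrassingly parallel."""
    return cores * per_core_speed

# Hypothetical x86-style part: fewer, faster cores.
# Hypothetical ARM-server-style part: many slower cores.
print("single request, 1 fast core:", request_latency(10, 2.0))      # 5.0
print("single request, 1 slow core:", request_latency(10, 0.8))      # 12.5
print("throughput, 8 fast cores:  ", aggregate_throughput(8, 2.0))   # 16.0
print("throughput, 32 slow cores: ", aggregate_throughput(32, 0.8))  # 25.6
```

The fast cores win any single-threaded job, while the many-core part wins once the work is a flood of independent requests, which matches the "Intel crushes app launches, ARM might kill at serving websites" framing above.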
Liberty Posted October 5, 2018

What you say is true for the mobile chips. What I'm saying is: we haven't seen an ARM chip designed by Apple for laptops/desktops. There's nothing that says ARM pipelines can't be widened a lot, L2 and L3 caches increased, clock speeds further ramped up, memory buses widened, etc. They could get a lot more done per cycle if the die-space and power budgets weren't so constrained by being designed for battery-powered mobile devices. What I'm saying is that even under those huge constraints, the Apple-designed SoCs are doing extremely well. So under different constraints, I think they could very well be competitive with x86.

But I agree that this would be a difficult transition to make. It would be easier for Apple than for anyone else, though, since they control a lot of the stack (hardware, software, app stores, IDEs/SDKs, etc.), and because they're one of the only ones with the experience of having gone through one of these transitions recently.
oddballstocks Posted October 5, 2018

What I posted was pretty much the high end of ARM. Cavium is really the top when it comes to ARM, and it's for servers, not mobile or desktop, so it doesn't have any constraints - power, space, nothing. Check out the STH link. The killer is at the bottom: the ARM had great numbers but used 800W of power to do it, compared to a few hundred watts for Intel or AMD. And these chips can run with huge heatsinks and a bank of fans in air-conditioned rooms.

I believe the issue is performance per watt. Ramping up clock speed is going to ramp up power usage, though bigger L2 and L3 caches could help. Intel couldn't scale their performance down to a low-power chip; that's where ARM excels. But scaling up is problematic, so they scale out, and the energy use goes nuts.

Realistically, that Cavium is about the top you're going to see in terms of memory. They support 1TB in a dual-NUMA config. That's less than Intel: with the E5-2600 v3/v4 you can get 1.5TB in a dual-NUMA config, which is the same as Intel's Scalable line. Of course, none of this matters for Apple. They're putting 16/32GB in laptops; there isn't a need for a lot of RAM in a personal machine.

I'm no Intel fan. All of our machines are Intel E5s, but that's not unique - 90% of the market is. I'd jump to AMD if compelling hardware were available at the right price.

You're right on Apple. The ecosystem is completely closed, so they could switch to whatever. The question is: what do they solve by doing that? If they used the same chip architecture for iPhone/iPad/Mac, maybe they could cut some of their developers and save money? Have a unified experience? I know Cook seems to hate traditional computers, and if they drop all of the iMacs and Pro machines I can see this route eventually. And realistically, for 95% of consumers this is the right path. For someone like my wife, who never uses anything but her iPhone, that is the computer.
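A back-of-the-envelope take on the performance-per-watt point. The wattages are the rough figures from the post; the equal-throughput assumption and the units are purely illustrative.

```python
# Back-of-the-envelope performance-per-watt for the comparison above.
# Assume, per the post, roughly similar aggregate throughput with the
# ARM box at 800 W and the x86 box at ~400 W. Throughput units are
# arbitrary and purely illustrative.

def perf_per_watt(throughput, watts):
    return throughput / watts

arm = perf_per_watt(1000, 800)   # 1.25
x86 = perf_per_watt(1000, 400)   # 2.5

print(f"x86 advantage: {x86 / arm:.1f}x per watt")  # 2.0x
```

Under those assumptions, matching throughput at double the wattage halves the perf-per-watt, which is the "scale out and the energy goes nuts" problem in one line.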
Liberty Posted October 5, 2018

I guess I'm just saying I don't necessarily believe that Cavium is as good as that architecture can be made by the best silicon team on the market with infinite money and access to top fab space, based on what they can already do for mobile; only an actual attempt by Apple could show us where the chips fall. Also, the Cavium seems to have been designed as a "data center" and "cloud" processor, so it optimizes for different things than a laptop/workstation CPU would, and might not be representative of what ARM could do if shaped in a different way. I don't think Apple would just stitch together a crapload of tiny cores in a laptop, for example, but rather would probably have 4-8 beefy cores designed for single-threaded performance, maybe with some low-power cores to take over when load is low (like in the recent iPhones).

I think if they made the switch, it would be to be more in control of the technologies on which they depend (and not be at the mercy of Intel's roadmap and delays and problems, as they have been for a few years). They could probably also recoup some of the margin that goes to Intel, since it probably wouldn't cost them as much to develop these chips as the billions that go to Intel over however many years those architectures last (but that's speculative), and they could differentiate more on non-performance vectors the same way they've done on mobile (incorporating things like the secure enclave, neural engine, and dedicated co-processors for certain features to save on power, etc.). They can already do some of that with off-CPU silicon like the T2 and whatever ARM iOS-based chip is running the touchbar in the new MacBook Pros, but I think some features would benefit from being on-die, close to the CPU (if they want to do more ML acceleration).

There are many other arguments against the switch, so it's certainly not a foregone conclusion that they will switch, but I think Intel's recent problems probably didn't help the status quo.
Liberty Posted October 8, 2018

https://arstechnica.com/gadgets/2018/10/intels-new-performance-desktop-lineup-an-overclockable-xeon-9th-gen-core/
walkie518 Posted October 9, 2018

"In a break with its recent practices, Intel has reverted to using solder instead of thermal paste between the processor die and the integrated heat spreader."

This is interesting... perhaps materials scientists have unlocked a new kind of solder that doesn't crack?
Liberty Posted October 22, 2018

https://semiaccurate.com/2018/10/22/intel-kills-off-the-10nm-process/
gfp Posted October 22, 2018

Thanks Liberty - that's a good source. I had not heard of the SemiAccurate site before.
Liberty Posted November 6, 2018

https://arstechnica.com/gadgets/2018/11/intel-announces-cascade-lake-xeons-48-cores-and-12-channel-memory-per-socket/

This should be able to run Crysis.
Jurgis Posted November 6, 2018

"This should be able to run Crysis."

LOL.
walkie518 Posted November 6, 2018

Intel's handling of Spectre and Meltdown puts Intel first and customers last. Because the software patches are expensive in compute resources, and because they aren't going to willingly recall the last 20 years of faulty chips, Intel is now requiring that the OS opt in to the new security features, whereas old chips are not secure by default. Worse, the "new" security features are actually the old security features... Intel is passing the buck. Now customers have to choose between performance and security.

Net net, cloud providers will win on the security front, as they can do more distributed computing across different kinds of chips, whereas individuals using hybrid or on-premise systems will likely remain most vulnerable.

Ethically, this seems like the wrong move. Legally, it evades blame and liability. Business-wise, maybe Intel can get away with this and is taking that risk, but I bet sales decline as cloud providers try to distribute workloads further in the future via software rather than rely on one architecture with flawed instruction sets.

Attached is Intel's "mitigation": specs336996-Speculative-Execution-Side-Channel-Mitigations.pdf
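For anyone wanting to see what their own machine reports, recent Linux kernels (roughly 4.15 and later) expose per-vulnerability mitigation status under a sysfs directory. A minimal sketch, which simply returns an empty dict on systems without that interface; the exact file names and wording vary by kernel version:

```python
# Sketch: read the Linux kernel's Spectre/Meltdown mitigation report.
# Kernels since ~4.15 expose one file per issue under this sysfs
# directory; on other systems the function just returns {}.

from pathlib import Path

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    vuln_dir = Path(base)
    if not vuln_dir.is_dir():
        return {}
    return {f.name: f.read_text().strip()
            for f in vuln_dir.iterdir() if f.is_file()}

# Typical entries look like {'meltdown': 'Mitigation: PTI', ...}
for name, status in sorted(mitigation_status().items()):
    print(f"{name:25s} {status}")
```

This is exactly the opt-in visibility the post complains about: the OS reports which mitigations it chose to enable, and anything the OS didn't opt into simply isn't on.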
Liberty Posted January 24, 2019

INTC Q4: https://s21.q4cdn.com/600692695/files/doc_financials/2018/Q4/Q4'18-Earnings-Release_final.pdf

News summary:
• Fourth-quarter revenue was $18.7 billion, up 9 percent year-over-year (YoY); full-year revenue set an all-time record of $70.8 billion, up 13 percent YoY.
• Delivered outstanding fourth-quarter earnings per share of $1.12 ($1.28 on a non-GAAP basis); achieved record full-year operating income, net income and EPS.
• In 2018, Intel generated a record $29.4 billion cash from operations, generated $14.3 billion of free cash flow and returned nearly $16.3 billion to shareholders.
• Expecting record 2019 revenue of approximately $71.5 billion and first-quarter revenue of approximately $16 billion.