(I may have got that wrong so happy to be corrected!)
What calc did you have?
But fundamentally electrons and 'charge' are quantum in nature, and even in the 80s if your transistors were "too close" you could get electron tunneling between them. So I, and others, assumed there was a really hard limit on how small you could go (and it was likely above 100nm rather than below it). And of course now hardly anyone uses old 90nm tech; I think 40nm is the current "jelly bean" process, but I freely admit I've not looked at the geometries used in the vast Chinese IC market of "support" chips.
As someone else here said, fab nodes are more like Lego blocks than a printer’s dpi – moving a design to a new node means rebuilding it in terms of the new node’s building blocks, not just “shrinking the design”.
So if the M2 Pro/Max/Ultra are intended to follow the same “reuse the cores (possibly rearranged) with more GPU blocks/cache etc” approach, it seems unlikely. But if they make a design break within the M2 series then it’s possible?
I’d expect Apple to follow their historical behavior and lead with the next A-series phone processor on the new node first. M1 Pro/Max/Ultra chips have huge surface areas – since process defect rates are driven down over time, it makes sense to start with your smallest chips first, so that you can get good yield out of your big chips once the defect rate is lower.
- A5(X) 32nm and 45nm
- A9(X) 14nm and 16nm
- A10(X) 16nm and 10nm
If Apple wanted to use N3 for the A15 or the M2, then they'd have had designs pretty much ready but, due to TSMC delays, didn't roll them into production.
I'm pretty sure Apple does a few designs in parallel and reacts according to what the supply chain and market conditions allow.
Indeed, the core designs were different. The M1 had A14 CPU cores, while the pro/max were based on the A15.
A14 —> M1
A15 —> M1 Pro/Max/Ultra, M2
So it stands to reason
A16 —> M2 Pro/Max/Ultra, M3
Though that does not take into account the process node.
'We had indicated in our initial coverage that it appears that Apple’s new M1 Pro and Max chips is using a similar, if not the same generation CPU IP as on the M1, rather than updating things to the newer generation cores that are being used in the A15. We seemingly can confirm this, as we’re seeing no apparent changes in the cores compared to what we’ve discovered on the M1 chips.'
And Wikipedia: https://en.wikipedia.org/wiki/Apple_M1
'The M1 has four high-performance "Firestorm" and four energy-efficient "Icestorm" cores, first seen on the A14 Bionic. [...] The M1 Pro and M1 Max use the same ARM big.LITTLE design as the M1, with eight high-performance "Firestorm" (six in the lower-binned variants of the M1 Pro) and two energy-efficient "Icestorm" cores, providing a total of ten cores (eight in the lower-binned variants of the M1 Pro).'
The efficiency cores on the Pro and Max aren't (so far as I can tell) faster than on the regular M1. But where the regular M1 has 4 performance + 4 efficiency cores, the Pro and Max have 6 or 8 performance + 2 efficiency. (Also, more L2 and L3 cache.)
This is true, but 'rebuilding' specifically refers to producing a new chip layout (i.e. the thing you send off to the fab to manufacture). This can be a lot of work, but it's all 'back-end' work. You begin with the RTL giving the logical/functional design of the chip, and implementation engineers push it through synthesis, place and route, etc. to produce a physical chip layout. This is what you have to redo; you can start with the exact same RTL.
When you design that RTL with a particular node in mind you can likely achieve better performance/area but it's not essential to do so.
Plus when you want to do the back-end work you need a fairly complete design to work from. So for instance Apple could be getting a back-end team to build a 3nm M2 now whilst the front end design team are busy working on the M3 (specifically targeted and optimized for 3nm).
So not sure it's true that you can't shrink down a largely similar architecture from one process to another.
Obviously it's fallen off the wagon a bit here, but seems more due to operational issues at Intel than it being fundamentally not doable
The first step is to prove the process - to the appropriate level of control limits, and then the second step is to optimize the process.
You may say it's "significant", but if you look at it another way, it de-risks both tasks as opposed to doing both at the same time but having much higher risk.
So in my mind, the "significant work" is less relevant, as the derisking is much more important.
There was work for sure to move to a new process, but that wasn't necessarily duplicated work with the architecture being created on a prior process. My impression was that it was already parallelized, and that not having to develop new architectures on new processes prevented a fair bit of contention.
The hardware side's optimizing the process on the first cycle, then doing a new one on the second.
Can you imagine having to live this life? Port to a new platform every other release?
Actually thinking a bit more... there is the Apple VR headset which seems to be getting closer to production and I'm sure it could use the efficiency from the new node plus is priced high enough to warrant the costs. Some speak of a launch beginning of 2023 with 1.5M units produced. That would be bang on in terms of schedule.
So it isn't outside the realm of possibility that the A16 design is 3nm, and they will delay its launch and/or otherwise make it less desirable to deal with lower initial production yield.
The A16 will be N4P I'd guess. In my initial comment I should have said A17 for N3 instead of A16.
They can't release 3nm A16 this year. I'm pretty sure it's on something like N4P.
IMO, the plain M2 was a 3nm design that had to be backported to TSMC 5nm due to the node transition taking longer than projected, in the same way that Intel's Sunny Cove had to be backported from Intel's 10nm to their 14nm++++++ node.
The rumors now say that Apple is getting ready to build M2 Pro and M3 on TSMC 3nm.
>According to one analyst, they will be coming from TSMC, and will debut later this year. Even more tantalizing is the notion that these Apple SoCs will likely be the very first to use TSMC’s bleeding edge 3nm process. This is mildly surprising given the M2 chip revealed this week was made using TSMC’s 5nm process, just like the previous M1 products. Launching the M2 on two different nodes would require Apple to do the design work twice — once for a 5nm M2 and once for the 3nm M2 Pro.
My wild ass guess is that the M3 they are talking about here is the original M2 3nm design that was delayed.
e.g. no reason to believe. Everyone here is "an analyst" in that sense.
There’s an M2 Pro chip and an M3 chip in the works, according to industry publication DigiTimes.
Are they after investors or traders? Why don't they call it something like Superlaser 9000, and the next year Hyperlaser 10K Speedmaster?
I originally asked this exact same question of the guy at AMD who started talking about their x86 chips in "equivalent" megahertz, because the actual clock rate was slower but they had better instructions per clock (IPC) and so got more done per second than their Intel counterparts, yet Intel was winning the "Megahertz race" because that was what the press was fixated on using to describe the "leading" chips. Anyway, if you're a leader and you can get the press talking about something your competitor isn't (or can't) do, then you "win" the perception of being ahead.
In semiconductors, this has been very effective at getting the press to see Intel as "behind" because their "nanometer" number was stuck in the double digits while "more advanced" fabs were already in production on lower-nanometer-number processes. (Note the scare quotes are all just to indicate topics that TSMC, Samsung, and Intel would each have different takes on regarding the current state of the art here.)
It's also still useful as an identifier to distinguish between one node process and another, so it's not entirely pointless, even if the nomenclature is meaningless.
No but enthusiasts (read: gamers) do. A lot of PC industry marketing nowadays is geared towards retail PC builders, who are very impressed by tech jargon.
There was a concerted effort some time back to formalize a standard measurement of process density using something like "million transistors per square centimetre" by a standards organization. (IEEE?) It wasn't a perfect measurement but it was a lot better than width. It failed so completely that I can't even Google it any more. The awkward name probably didn't help.
edit: it's "MT/mm2". Some people actually use it, but more in the informal sense that has the problem you described rather than the formalized one, which I still can't find.
We can always increase W, but decreasing L requires technological advancements.
So I’m not sure on the value of area (W and L); L alone is more relevant (for the reason I said above).
We are dealing with FinFETs/GAA now, which are not the same as planar transistors, but I guess there should still be relevance in L and W because these measurements are important even for resistors.
So probably some sort of equivalence between a FinFET and a planar transistor should be given to name the nodes.
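To make the W/L point concrete, here's a toy sketch using the first-order textbook square-law model. The k' and voltage values are made up for illustration, and real FinFET/GAA devices need far more detailed models:

```python
# Rough first-order (square-law) MOSFET model, just to illustrate the
# W/L point above -- NOT a real device model. k_prime and the voltages
# are made-up illustrative values.

def drive_current(w_over_l, v_gs=1.0, v_th=0.4, k_prime=200e-6):
    """Saturation drive current: I_D = 0.5 * k' * (W/L) * (Vgs - Vth)^2."""
    return 0.5 * k_prime * w_over_l * (v_gs - v_th) ** 2

# Doubling W doubles the current (easy: just draw a wider device);
# halving L does the same, but L is set by the process node.
base = drive_current(1.0)
assert abs(drive_current(2.0) / base - 2.0) < 1e-9
```

That symmetry is exactly why W is "free" at layout time while L improvements have to come from the fab.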
https://en.wikipedia.org/wiki/3_nm_process gives a reasonable summary in its introduction.
My recollection of the specific quantum effects you're thinking of is quantum tunneling of electrons. The problem occurs when the gate gets small enough that electrons can pass through without the transistor being switched on, which starts to happen around 3nm.
> a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers
So if everybody believes in the nanometers, nobody cares.
Until you hit a wall.
> "Quantum effects typically occur well behind the curtain for most of the chip industry, baked into a set of design rules developed from foundry data that most companies never see. This explains why foundries and manufacturing equipment companies so far are the only ones that have been directly affected, and they have been making adjustments in their processes and products to account for those effects. But as designs shrink to 7/5nm and beyond, quantum effects are emerging as a more widespread and significant problem, and one that ultimately will affect everyone working at those nodes..."
> "“At very small dimensions of the body, the semiconductor band structure gets ‘quantized,’ so instead of a continuous energy spectrum for the carriers, for example, only discrete energy levels are allowed,” Mocuta said.
This quantum confinement has several possible consequences. Among them:
• A transistor threshold voltage change.
• A change in the density of states (DOS), or the number of carriers available for current conduction.
• A change in carrier injection velocity."
Think of how thick towels started as a manufacturing defect: a machine in a conventional cloth factory had a part break down, and instead of churning out the usual flat cloth it erroneously wasted a loop of yarn at each 'weave' (for lack of a better word, as I'm not into weaving). With no immediate way to recover the yarn, the thick cloth was sold/distributed as scrap. The machine's fault was identified and fixed. But the users of the cheap scrap came back for more, as they'd discovered its superior water-absorbing qualities... Since the fault was documented, they could intentionally reproduce the desired 'faulty' cloth.
What's been happening is that fabs have been exploiting more and more tricks to increase transistor density while still using the larger feature sizes. So flat transistors became FinFETs, increasing their gate area and allowing chips to use fewer of them for the same silicon area, etc.
So read "3nm" as "a process with the same transistor density as you would expect had some ancestral ~90nm process been shrunk by a factor of 30".
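Spelling out the arithmetic behind that reading (ideal scaling assumed, numbers purely illustrative):

```python
# If density had kept scaling as (feature size)^-2, a 30x linear shrink
# of a ~90nm ancestral process would give a ~900x density gain, and
# the result would be marketed as "3nm" regardless of actual feature sizes.
ref_node_nm = 90
linear_shrink = 30
marketing_name_nm = ref_node_nm / linear_shrink   # the "3" in "3nm"
density_gain = linear_shrink ** 2                 # 900x more transistors/area
```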
I was dumbfounded even hearing about 14nm YEARS before it was a thing (I also got to see 64-core concepts at Intel in ~1999 or so).
3nm is mind boggling amazing.
Whatever happened to "voxels"? (Before the graphics term, Intel was creating "voxels" that used light to transfer signals vertically between layers... but I stopped following CPU architecture years ago.)
3nm means the smallest feature, e.g. the width of a channel, not the size of a transistor.
There are quantum effects at this scale (and indeed larger), and one of the big challenges with process design is minimising them. See  for an overview.
Similarly printing text at 600 dpi doesn't mean that the actual width of the stems in the letters is 1/600 of an inch.
It costs (within an order of magnitude) the same to build a modern fab as it does to build one for a process 1-5 generations back, maybe more. You have a roughly similar backlog for equipment too. For your troubles you get far fewer chips per wafer so your cost per chip is higher. And the chips are slower and use more power. That makes it much harder to get any kind of payback on a depreciating asset that only gets more out of date. You also risk demand for your new 45-nm or 90-nm fab dropping off toward zero in 10 years.
Historically older fabs would see a drop-off in business as new chips were designed for new processes so as time went on there was more and more capacity available for cheap on the older nodes. That cycle is and has been slowing down though so there isn't much slack even for older fabs.
I'm not sure where the market will end up. If the current shortages are a temporary backlog + hoarding then things will work themselves out within 1-3 years and anyone starting lots of fab construction risks bankruptcy - something that has happened multiple times in the past as keeping a fab idle is equivalent to burning it down so you end up having to dump chips at cost or even a small loss. On the flip side if the recent disruptions are merely accelerating an existing trend then anyone kicking off fab construction stands to make a lot of money.
You could make a microcontroller on a 3nm node if you wanted to, but first you'd have to design a new core, and then tell people to pay $100 per chip instead of $0.01.
TL;DR: the chip shortage is an economics problem, not a physics problem.
(Edit: Yes, trying to read things in a different way - "weird quantum artefacts" at 3nm have nothing whatever to do with the automakers' problems.)
(Edit2: Here is the point which I was originally trying to make: "Chip manufacturing, even at 3nm x ~30 = ~90nm, is still extremely difficult. That fact is a big part of why the automakers did not attempt chip manufacturing, even at ~90nm.")
I don't know why automakers didn't engage their partners here to expand manufacturing. I am sure they asked, and the companies that can build 90nm fabs decided not to. Maybe it doesn't make sense after the backlog is cleared, maybe they like the higher prices? And if a car company wanted to start manufacturing chips themselves, they'd have to hire engineers, license patents, work out bugs, etc. and the risk is that the shortage is completely gone after you do all of that. (And, all this during a pandemic. If they wanted to use wood to build the physical building containing the fab, there was a shortage of that. So, a lot of problems to solve, and 10 billion dollars starts looking like a small number.)
CEO: Okay, we will have the 20nm node ready by Q4 next year, right?
Engineer: No, you see, there are quantum effects...
CEO: You're fired!
(Q1 next year:)
CEO: Now, we need to have this 20nm node ready by the end of the year!
New engineer: Sure, we will have the "20nm node" ready by then.
Threadripper seems insanely expensive right now, will the next generation be faster at least or use less energy? Or, in other words, does it make sense to wait?
It has some added instructions (AVX512) and will likely be more efficient at the same clock rate but they will be pushing the clocks higher instead.
I would not wait around unless you have plenty of time. I doubt a cut down Genoa will show up on workstations until the server market is satiated.
Maybe do it like this: plot nm vs density for "properly" named old nodes and extrapolate further for current nodes.
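That extrapolation is easy to sketch. The density numbers below are synthetic (ideal 1/L² scaling), NOT real foundry data; plug in MT/mm² figures you trust:

```python
# Fit log(density) vs log(nm) on "honestly named" old nodes, then invert
# the fit to ask what nm name a modern density would have earned.
import numpy as np

nodes_nm = np.array([90.0, 65.0, 45.0, 32.0, 22.0])
density = 1e4 / nodes_nm ** 2        # fake MT/mm^2 with ideal-scaling shape

# Linear fit in log-log space: log(density) = a*log(nm) + b
a, b = np.polyfit(np.log(nodes_nm), np.log(density), 1)

def implied_node_nm(measured_density):
    """Invert the fit: the 'honest' nm name this density would get."""
    return float(np.exp((np.log(measured_density) - b) / a))

# e.g. a process delivering 200 (fake) MT/mm^2 maps back to ~7nm here.
print(round(implied_node_nm(200.0), 1))
```

With real published density figures the fitted slope won't be exactly -2, which is sort of the whole point of the exercise.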
Completely naive about this level of engineering, but very interested :)
For example (just to chuck some more entropy on the pile) googling "tesla 5nm" finds a bit of an indication they're doing things in that space.
But there's probably also an example of some phone chip that has Cortex A53 cores implemented on two different transistor libraries to hit significantly different performance/power points.
AnD wItH tHe sAmE nUmBeR oF nAnOmEtErS tOo lol
The solution is simple: treat their numbers as brands. A Core i9 is generally better than an i3 of the same generation, but where a Ryzen 5 is compared to that is anyone’s guess (depends on the exact models, generations, etc).
Node sizes have had nothing to do with a characteristic length for quite a while now, that ship has sailed.
For example the scaling factor from from say 5nm to 3nm for transitory is X. But for SRAM, things have been getting progressively worse so it often X/2.
And with caches going up you end up using a process node terrible for sram, but needed for cpu transistors, and it’s a huge waste. You can see why you might use a different process tech for caches, and AMD is clearly going this direction.
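A toy area model shows why a cache-heavy die gets so little from a node where SRAM barely shrinks (all numbers below are illustrative, not real scaling factors):

```python
# Weighted shrink of a die split between logic and SRAM. If logic
# shrinks a lot but SRAM barely does, total die shrink is disappointing.
def die_shrink(logic_frac, logic_scale, sram_scale):
    """Return new die area as a fraction of the old one."""
    sram_frac = 1.0 - logic_frac
    return logic_frac * logic_scale + sram_frac * sram_scale

# A 50%-cache die where logic shrinks to 0.6x but SRAM only to 0.95x:
print(round(die_shrink(0.5, 0.6, 0.95), 3))   # only ~0.775x overall
```

Which is exactly the incentive to peel the SRAM off onto its own (cheaper, older) process and stack it, as AMD's V-Cache does.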
Anyhow, TSMC 3nm is just a marketing term. On the physics side, it has about 7-8 key differences from the previous 5nm. That’s 6-7 too many things for people to care about or remember.
Intel used to use a part of the transistor when it talked about scale. But then the geometry of the transistor changed! That number no longer has meaning. And transistor geometry changes over time, and you have multiple geometries to choose from on the same process node. Oops.
So… back to marketing terms.
The only people upset with this are benchmark warriors. People in the industry, or familiar enough with it, know how to navigate the characteristics of a process and do not need one single neat number. Customers by and large don’t care: businesses are after the cheapest and best-supported option, and consumers are mostly price-sensitive, except for higher-end brands that don’t communicate at all on this sort of thing.
This would not solve any of the industry’s problems.
I think there are problems with that metric too. Not all transistors are created equal. Depending on the switching speed that you want, how much leakage current you can tolerate, etc., I bet you can vastly change how many transistors you can fit in a given area.
See how there are six foundry-provided cell libraries that make different performance, power, and area trade-offs, even though it’s all 130nm.
There are even more libraries than that too. Like the OSU one that makes even different trade offs.
This won't help in terms of clarity about how good the transistor is. As others mentioned, there are different kinds of transistors, and different manufacturers would still use their smallest (read: not necessarily the fastest/strongest/most-used) to market their process.
A major reason I think this won't work is that transistor size no-longer limits the density of the chip. Nodes below 20nm have transistor contacts (what connects the silicon to metal), and metal tracks that are much larger than the transistor. The contacts typically limit the pitch between transistors and hence the density. A lot of innovation is now done to shrink those elements rather than work on the transistor physics/materials/size directly.
Layman question from not GP, but wouldn't reducing the "overhead" around transistors increase the transistors-per-area metric and thus be at least somewhat useful?
As you shrink the contacts and the metals their resistance and capacitance exponentially increase. This means that both your power will go up and your speeds will go down. Also the process becomes more prone to manufacturing errors. So shrinking those elements blindly without innovation just to increase the density numbers is not really a good metric of the quality of the process.
transistor switches /( s * mm * Joule) ?
Koomey's law seems to hold strong.
I work in Analog so I'd like a more analog based definition (which also works for digital tbh) based on size or density, trans-conductance(gm), output resistance (ro), and Unity frequency (fT) but that's never going to happen :<
Ideally you'd have some sort of metric that includes power efficiency, performance, and maybe even cost since smallness isn't really a feature in itself unless you're making something space-constrained like a hearing aid. It's hard to condense that down into a single number though.
I don't think this is fair though. Some processes may support FinFET, or GAAFET, or some neat trick where, if you switched to the process but didn't optimize for size, you might be able to double the performance at half the power.
Like you suggest, it's some crazy multidimensional problem space. There's no hope in representing it with one number. And, nobody that's actually designing would care about any of these numbers. In the end, they would only be used by marketing, which is all they're used for now.
Well, you technically can, but it's not starting with a thicker wafer, it's growing a second (or more) layer of dopable silicon on top of the normal doped silicon/transistor gate insulator/wiring/more wiring/even more wiring stack of the chip, then adding another full stack on that new dopable surface. Pretty sure the fabrication infrastructure for that makes conventional, GDP-of-a-small-country photolithography fabs look cheap by comparison, though. Plus you'd be at least squaring (and probably much worse) the already not very good production yield.
Actually, you probably can't; the distance between the sides is practically astronomical compared to the horizontal feature size. I don't remember the exact dimensions, but if you assume a 1-nm trace is equivalent to a 10-meter road, then a 1-mm wafer thickness would be equivalent to a 10,000-km planetary diameter, so you're getting close to the scale of routing an internet cable through the center of the earth. (At least it's not molten, I guess?) And doping generally works by diffusion, not drilling a hole.
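Quick back-of-envelope check that the analogy's numbers hold up:

```python
# Scale the 1 nm trace up to a 10 m road and see what a 1 mm wafer
# thickness becomes at that scale.
trace_m = 1e-9            # 1 nm trace...
road_m = 10.0             # ...mapped to a 10 m wide road
scale = road_m / trace_m             # 1e10 magnification
wafer_thickness_m = 1e-3             # ~1 mm wafer
scaled_m = wafer_thickness_m * scale # 1e7 m = 10,000 km
print(scaled_m / 1000)               # in km; Earth's diameter is ~12,742 km
```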
I naively assumed you could get an interconnect through, by taking up a large area of silicon. :)
I think a better question to ask is whether the underlying economic factors behind continuous process tech improvements are healthy. Is there enough value-add to the final user by continuous process tech improvements? Are the costs for that improved process tech scaling with the value-add? And is the competitive landscape healthy? While that holds true, companies will keep looking for process tech improvements to give them a competitive edge.
In the '80s and '90s this was very much true, but in recent decades less so, hence why we see consolidation/reduction in the number of foundries, foundry services to amortize the cost of older node tech, and R&D going to companies/partnerships that can capture the most end-user value-add (Apple/TSMC).
Looking forward, I think the economics are very healthy with the design-house/foundry-service model that we have right now, so I would guess that Moore's law (or some approximation) will continue for the next decade. There are a lot of process tech innovations that can lead to better performance that are not necessarily scaling-related. In fact, scaling transistors stopped being very useful a while back, AFAIK, due to the breakdown of Dennard scaling.
I agree with this and I suppose I just sort of reached for Moore's Law out of habit and maybe a bit of laziness. Thanks for articulating the question more appropriately.
>"Looking forward I think the economics are very healthy with a design-house/foundry service model that we have right now so I would guess that Moore's law (or some approximation) will continue for the next decade."
This was what I was looking for. Cheers.
It isn’t actually a law, it is more of an observation / prediction
But people have been inventing new technologies to be able to still improve chips (for instance, finfets and now GAA)
The only current products that are really hurting for better chips are high powered wearables like watches, VR headsets or AR glasses. The next few years should see some tangible improvements in those products, but probably not much difference in desktop, laptop or phone. Datacenters, since they run at high utilization rates, will continue to take advantage of the cost savings of slightly more performance per watt and dollar per transistor costs.
So pretty big deal for computing. We may be close to the age where most laptops are fanless, for example.
Most non-Apple devices are still on 7-11nm equivalents, since Apple gets first dibs on new-gen fabs from TSMC (they pay for the right).
Doubtful since both Intel and Apple are actually INCREASING power requirements over time. Look at Apple M1 vs M2. Intel i7-1185g7 vs i7-1280p.
The manufacturing process matters far more than whatever tweaks they make intra-generation. M1 is 5nm, Intel is 7-11nm
That's not what Anandtech found when they tested the performance and efficiency cores used in the A15 and M2.
>In our extensive testing, we’re elated to see that it was actually mostly an efficiency focus this year, with the new performance cores showcasing adequate performance improvements, while at the same time reducing power consumption, as well as significantly improving energy efficiency.
>The efficiency cores have also seen massive gains, this time around with Apple mostly investing them back into performance, with the new cores showcasing +23-28% absolute performance improvements, something that isn’t easily identified by popular benchmarking. This large performance increase further helps the SoC improve energy efficiency, and our initial battery life figures of the new 13 series showcase that the chip has a very large part into the vastly longer longevity of the new devices.
Correct me if I’m wrong but high performance fanless laptops are only possible with ARM chips atm. Has any other big laptop chip designer even announced plans for anything remotely close to M1/M2?
M1 is the only 5nm processor on the market right now. Yes, you will see quite comparable performance once Intel and AMD get to 5nm as well
So, extrapolating this, we'd expect 1nm (if that's even possible) to offer roughly the same amount of gain. Not sure what we can have beyond 1nm.
N2 in 2025. N1.4, or maybe N14A, in 2027. Roughly N1 in 2030. I believe we will hit an economic wall by the end of this decade, i.e. the cost of the next-gen node exceeds what the market is willing to pay for it.
Minecraft contains some native dependencies, though; you'll need something like https://copy.sh/v86/ or https://bellard.org/jslinux/ with the right operating system image to run it in browser.
Apple has been selling phones and laptops with N5 for two years and I don't think we still have any competing product in the hands of consumers using N5 yet. If I'm not mistaken Nvidia and AMD are about to release products using N5.
But this is also a free market.
Apple is both willing to pay more and has the needed scale to buy up all capacity. If someone else wanted to both pay more and buy up all capacity - they have the ability to do so.
If - in addition to what was described - part of the deal is that Apple would only make the deal if TSMC would not add any more capacity for 2 years NO MATTER WHAT - that is blatantly anti-competitive and monopolistic.
If no one else is willing to pay for TSMC to build another facility to produce more chips - then that's a free market. Apple was the highest bidder and won.
The essential facilities doctrine might apply. The question isn't whether competitors are literally forbidden from the latest chips; it's whether they're "practically or reasonably" unable to access them, and all evidence suggests that's currently the case. Possibly it's the competitors' own fault for refusing to pay the entirely reasonable price TSMC is insisting on. Perhaps Apple's contract with TSMC actually has carefully written clauses to ensure others could practically gain access without spending enormous amounts of money, but which nobody has been willing to use for their own reasons. But it certainly seems possible that the contracts effectively ensure Apple has sole access to the latest hardware, which would line up neatly with the apparent scenario we're in: that Apple currently has sole access to the latest hardware.
That's not to say it's a slam dunk case either. I wouldn't care to guess either way.
Shutting out all competitors from new technology for years by outspending them (arguably from profits largely made via other monopolistic like behaviors) could qualify for many.
I'll never understand the endless parade of apple goons who rush to defend them at every turn. Use some critical thought instead of treating it like sports
I'm describing a company that is able to pay far higher than competitors due to anticompetitive practices which drive their margins far above what would be realized in a truly competitive environment.
I'd love to see the alternative timeline where Microsoft charged 30% for all software on Windows, and where you and others shamed them for it. Because I know for a fact all of those defending Apple here would have at the time.
And if you look at their revenue breakdown, it’s clear that most of it comes from hardware.
When an ecosystem becomes large enough, societally impactful enough, and is controlled exclusively by one entity, that qualifies as a monopoly within that ecosystem.
You remove any capacity for competition to drive down margins. Apple can ban Spotify tomorrow to promote apple music on their devices, for example. Not anti-competitive in your mind?
You have two gated monopolistic ecosystems, choose the one you want to be gouged by. In a competitive environment, margins get compressed close to cost to deliver. If a company can achieve huge margins on items that could otherwise be much cheaper to deliver in a competitive environment, that qualifies as anti competitive.
How much does it cost the supplier to host apps via CDN? Or to process a single payment? Apple can take as much margin as they want on these because nobody else is allowed to provide those services within their ecosystem
The law will adapt to this, it's quite obvious to those that are unbiased.
The amount of REVENUE Apple makes from iTunes is likely less than $5Bn per quarter.
Apple is not using their iTunes monopoly to buy chips.
They're using their insane amount of phone sales (at absurd profit margins) to buy chips.
Again, I hate Apple as much as anyone. Anyone who thinks Apple is a monopoly in phones is a moron. They don't even own 30% of the smartphone market...
I agree Apple is anti-competitive within the ecosystem of the iPhone. That's not going to change their ability to buy all of TSMC's chips.
Is there some number of gated ecosystems that we get gouged by that would be considered acceptable and non-monopolistic? Three, ten?
When a platform reaches a sufficient size (subjective), it must be opened to competition for services provided within that platform.
We're talking about the base operating system for global computing here, not a car's dashboard system
This sounds similar to how Torvalds has a monopoly on services within mainline Linux?
You can poll your friends, colleagues, and relatives: how many people use iPhone vs something else? If the percentage is more than 50%, then Apple is effectively a monopoly.
Same with the tablet market: just make a table of how many iPads vs alternatives.
It only makes sense to do this on the level of a governing body, because the term 'monopoly' is defined by governing bodies and the remedies against it are enforced by the same.
You can cherry-pick / gerrymander any group you want to make it look like Apple has a monopoly by the numbers, but in order to make it stick you'd need to demonstrate that Apple fixes prices (they don't, their phones cost ~2x or more what competitors' phones cost) or that they're somehow excluding other competitors from access to the phone market (they're not).
However, iPhone is not a commodity, it comes bundled with an OS and plethora of apps and services, the whole ecosystem.
Per this report ( https://www.imore.com/apple-takes-75-smartphone-profits-desp... ), Apple is taking over 75% of the market's profit pool, despite selling only 13% of units.
If we identify monopolists using a bottom-line approach, then Apple would be a clear monopolist. As would Facebook and Google, as they should be, if lawmakers truly cared about the public good.
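As a back-of-the-envelope check on those figures (75% of profits on 13% of units, per the report cited above), the implied per-unit profit gap works out to roughly 20x; a quick sketch:

```python
# Figures from the cited report: Apple takes ~75% of the smartphone
# market's profit pool while shipping ~13% of units.
apple_profit_share = 0.75
apple_unit_share = 0.13

rest_profit_share = 1 - apple_profit_share  # 0.25
rest_unit_share = 1 - apple_unit_share      # 0.87

# Profit earned per unit shipped, relative to the market as a whole.
apple_per_unit = apple_profit_share / apple_unit_share
rest_per_unit = rest_profit_share / rest_unit_share

print(f"Apple earns ~{apple_per_unit / rest_per_unit:.0f}x the per-unit "
      f"profit of the average non-Apple phone")
```

That ~20x gap is what makes the "bottom-line approach" argument above look so different from the ~13% unit-share picture.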
Per the parent, the challenge is to convince governments to take your proposed approach. Repeating the argument to try to convince a HN crowd is not just non-productive, but counter-productive if it doesn't move toward this goal.
Taking away property and/or converting investments into a utility when a business becomes too successful results in a lot of nasty byproducts and most governing bodies will shy away from this.
Instead, you likely see the issue completely side-stepped. Nobody truly cares about labelling Apple a monopoly or declaring iOS to somehow be a market unto itself; they want a change in behavior.
The challenge is how to side-step this when, per the letter of the law, they have literally done nothing wrong. For instance, the EU will likely get a lot of pressure due to new regulations being aimed solely at US-headquartered companies for the primary benefit of companies in the EU.
Actually, if this had happened back in the day, we would all be on the Linux desktop already...
>I'll never understand the endless parade of apple goons who rush to defend them at every turn.
Maybe because people are not defending Apple but actually giving the correct account of the story?
I dislike modern Apple, or Tim Cook's Apple. But that doesn't mean I agree that what they are doing (in the context of TSMC) is monopolistic. Without Apple, there would be no N3 by 2023. It is as simple as that. Apple in this case is not a customer, but more like a partner who is willing to invest (and take the associated risk) and spend to get and push for the latest technology.
Backing up, though, I think it's interesting that Apple has so much money that they can afford to buy up leading nodes for years. Why do they have that much money? I think it's obviously because of anti-competitive behavior elsewhere, such as their App Store, which basically gives them a money printer.
The free market sucks. People need to stop valorizing it.
In a purely free market, monopolies can form and exist in perpetuity
To have a truly free market, it should account for this edge case - that is, once a company grows beyond a certain size, it should by law be split up.
There should be other provisions too - for instance, if a big corporation causes legal trouble, they should be required to match the legal budget of the claimant. If someone is suing Apple over an issue with their hardware and Apple sets aside a £100m legal budget, by law they should be required to set aside £100m for the claimant, so there is a level playing field. That is only if the judge decides the case should go on, of course.
It would certainly have the potential to make the market work better and improve competition, but that's not what a free market is.
Corporations also can't do what they want - for instance they cannot commit fraud, evade taxes, sell dangerous products and so on.
You have something like that in sports: in order for participants to compete freely, they are tested for illegal drugs, so that no one has an unfair competitive advantage.
Free != do what you want.
> an economic system in which prices are determined by unrestricted competition between privately owned businesses.
"Unrestricted competition", to me, clearly says that businesses should be able to compete without such restrictions as being split up by the government if they become too big.
Or this definition from investopedia (the first actual result on Google):
> The free market is an economic system based on supply and demand with little or no government control.
The government splitting up businesses is certainly a form of government control.
However, Wikipedia's first paragraph is:
> In economics, a free market is a system in which the prices for goods and services are self-regulated by buyers and sellers negotiating in an open market without market coercions. In a free market, the laws and forces of supply and demand are free from any intervention by a government or other authority other than those interventions which are made to prohibit market coercions. Examples of such prohibited market coercions include: economic privilege, monopolies, and artificial scarcities.
Apparently, the government imposing control and restricting competition is part of Wikipedia's definition of a "free market", so extremely harsh anti-monopoly laws and laws against anti-competitive behavior fit perfectly well within Wikipedia's definition, but make the market less free according to the definitions I've been following.
The important part is, I agree that a lot of government intervention is required to make markets function efficiently. I just personally wouldn't call that a "free market", rather a well-regulated market.
Also, Apple didn't make Intel fumble their 10 nm process or Qualcomm miss AArch64. Intel's and Qualcomm's own anti-competitive practices made them lazy, and that resulted in this, not Apple buying TSMC capacity.
How is that not free market?
As a consumer, I can now go out and buy a machine with Apple silicon, Intel, or AMD.
If anything, we're finally seeing some improvements after years of chip stagnation.
I'm not sure if node exclusivity was why Apple had the first true wireless earbuds (there was one competitor first with some giant ones, so I suspect it was), but lots of competitors followed fast - other foundries' nodes were more competitive then. Anyone know the history on that, and what process nodes were used by them and by the earliest competitors?
Apple was just smarter and convinced consumers to buy its products at a premium - just like with iPhones.
Consumers couldn't buy products with a pocketable hard drive: Apple had exclusivity agreements with all manufacturers of them, completely monopolizing their use for music players.
Also remember that the iPod wasn't an immediate hit; it only sold 1 million units in total in its first two years. It wasn't until two years later that it was made available on Windows.
Apple did the same with Flash storage before the iPod Nano came out. Any of its larger competitors at the time could have had the vision to see that flash based players were the future and ensure they had the supply.
There are multiple fabricators (supply) and a small pool of chip manufacturers (demand).
As a chip manufacturer, I cannot go out and buy competitive silicon. I can't even get last-gen silicon. The best I can get is an improved 7nm node from Samsung, but nobody wants to run their CPU on the blood and guts of Exynos chips. So, by that definition, I'd say that Apple has monopolized the cutting-edge lithography business by leveraging a partner.
It isn't like Samsung and AMD can't afford to jump on new nodes, they just won't be able to raise their prices enough for it to make sense to them.
Apple pays more, hence they get the product.
If someone else wants to pay more, they can get the product too
There are other foundries too, they could get their phone chip done with Samsung on a similar technology
What is true is that outside of TSMC and Samsung, you don't have options for high-end products.
How they got more money than competitors and leveraged it to get exclusive control doesn't matter much: it's the textbook definition of a monopoly. On the legal side, IANAL, so no idea if this could be illegal somewhere.
They will just get the first chips — maybe for the first year of production, idk
Whatever you want to call it, it's pretty anticompetitive. Seeing them do the same thing for 3nm doesn't inspire confidence that they're going to do the right thing this time around.
Except for the first year, where they get a head start: nobody else can have matching chips or learn what issues come out of this process, giving them an even bigger advantage, since they have stabilized their process by year two, when everyone else is barely ramping up.
Don't be a clown. You know fully well what Apple is doing.
If other fabs were available, Apple's ability to pay up would not be leading to accusations of anti-competitive behaviour.
If I ignore reality and make up non-existent fabs, the extremely monopolistic, anti-competitive behaviour is not a problem!
>they are paying what they can afford.
The problem is not paying what they can afford, it's paying that in addition to forbidding TSMC from increasing their capacity on that node while the contract is ongoing. It's both paying what they can afford, and paying so that others can't afford it.
If some other company is willing (or really: able) to make a competing offer, I’m not aware of TSMC not being interested in that. It just happens that the number of companies with the margins to spend what probably amounts to the GDP of a small nation is approximately zero (or one if you count Apple)
Except they didn't have exclusive access? Huawei also bought 5nm. Intel contracted 3nm along with apple before backing out.
Is there any proof that this is actually true? TSMC is currently building a few new factories; I suppose they will produce 3nm chips in them.
I am sorry but that is simply not true. Sigh.
Except that Apple doesn't supply or trade in chips. If they were buying up chips and hoarding them, it might be considered a monopoly.
The seller of top-quality material is in no position to sell their A-grade material to all buyers.
I think the issue is that the investment costs are so high that this ability is limited to a very small group of companies, as is the required knowledge.
I don't really know where I stand on this myself; in principle I agree with your point of view, but I also think it's a little bit too simplistic.
This began more than 10 years ago when Apple was sitting on a massive pile of cash. What do you do with all that cash? Most companies will simply pay dividends or, more commonly now, do share buybacks (side note: it's become popular to view share buybacks as some kind of systemic problem but they're functionally no different to dividends).
Instead Apple engaged in vendor financing. A lot of components Apple needs requires massive capital investment. Companies might borrow money for this. Apple essentially became the bank, saying we'll give you the money for this. In return Apple gets some combination of preferential pricing, guaranteed availability or exclusivity for a certain period.
Eventually Apple doesn't even need to spend money to do this. Just the commitment from a buyer the size of Apple to purchase your output can have the same effect. It'll help secure financing, and Apple can extract the same preferential treatment for that commitment.
This is the logistics side of Apple's business and Tim Cook was the architect of that.
AMD used GlobalFoundries.
Idk about MediaTek, could be.
What started the TSMC "boom" was Apple.
 https://www.techspot.com/article/653-history-of-the-gpu-part... "The TNT2 utilized TSMC's 250nm process and managed to deliver the performance Nvidia had hoped for with the original TNT."
Ignoring the 2014 outlier (2013 was the year Apple started using TSMC) the growth curve looks pretty uniform for the last two decades.
But my reasoning was:
* Most importantly, TSMC's market share grew uniformly, and they already were quite dominant before (market shares: 2013: 49%, 2014: 54%, 2015: 55%, from )
* I don't know whether that additional revenue is from Apple. Apple's use of TSMC has to have started in 2013 if they shipped the products in 2014, and to my knowledge didn't end in 2014.
* If Apple could use a node size in 2014, actual development of that has to have started quite a few years before.
Oh, I see. Makes much more sense than getting $60B out of the blue :)
Really, the biggest semiconductor manufacturer in the world, with tons of customers, cannot exist without Apple?
TSMC started becoming relevant because Apple switched from Samsung to TSMC way back with the second iPhone, if I'm not mistaken.
They started getting a lot of cash and investing it back in R&D, and with time got to where they are now: leading the industry by far, with only Samsung capable of providing somewhat competing technology, but with a much smaller market share.
What's the difference between having an agreement with a factory and having a better employee?
Apple pays the money and takes the risks, so they get the rewards.
Takes what risks? Buying the riskless new transistors that TSMC puts out because they know that they work?
Apple taking risks would be making the chips themselves (see: the failure that was Itanium and how it screwed Intel because they had to reorganize their facilities for it), not buying it wholesale and just having TSMC print it.
And if you think that risk is small / nil I would point you to Intel's recent track record.
And just to point out that Apple actually does bear the risk you've highlighted: Itanium was a design failure (and design is what Apple does), not a manufacturing one.
You could say the same thing about Intel until they blew it with 10nm. Who knows what the future holds
Also, blaming Intel's process problems on Itanium (discontinued 2011) is, well, an interesting perspective!
The issue seems to be these other companies being ready to port their designs. I have not seen any report that Apple is undertaking anti-competitive behaviors in order to secure the entire generation of wafers.
Up to now it doesn’t seem like anyone is willing to pay.
I suspect the competition really is “ok” with waiting / don’t want to pay.
Idk if Samsung has an equivalent to TSMC 3nm though
Qualcomm saw big gains in efficiency and performance by moving from Samsung to TSMC. Various high end Android phones use this chip already.
Qualcomm's current flagship mobile chip, the Snapdragon 8+ Gen 1, is on TSMC 4nm.
And then it's not like the day after, Apple could produce iPhones or Macs with those chips. It takes months from having the first chip in hand until production churns out volumes of finished devices.
Might also lead to some awkwardness for Apple in having to port a design running on an older node to a newer one, if M3 does use 3nm.
By all accounts, and if the pattern holds true, the M[n] chip is generally the scaled up version of the latest A series design.
So if we have a release of A15 on 5nm, and the M3 is going to be based off that design, but ported to 3nm, that's kind of an interesting situation.
The lead time would be long enough that I doubt the M2 was ever going to be a 3nm chip.
I can’t actually find anything other than ‘analyst states’ type links though.
TSMC call it N3 instead of 3nm for a reason. It's not 3nm.
Of course once they were beyond the clockspeed-above-all Pentium 4, Intel themselves stopped focusing on marketing clockspeeds and things gave way to basically arbitrary naming/numbering schemes. (Or to some extent, core counts.)
When Intel was dominant (pre-Athlon XP and pre-Core era), clock speed alone could be used to measure performance. Then the AMD Athlon XP came out with higher IPC than the Pentiums, but customers were used to "higher clock, higher performance", even though the Athlon XP was faster at the same clock speed.
So AMD changed their model names to indicate performance parity; that was the game Intel played. They didn't lie or mislead about clock speeds anywhere.
And of course it's not just Intel marketing this way (everyone does), but their more recent naming change makes it more transparent. Tired of years of saying "well, TSMC's Xnm (or X, a number with no units that's sort of implied to be nm) is really equivalent to our X+Ynm," they just market on parity.
Like I said in the start of my previous post, this isn't actually inflating (or deflating as the case may be) but it's just taking a concept that's widespread among the consumers as a proxy for quality/value and marketing to that idea rather than any actual physical characteristic.
The knowledge and education backlog is way too high for the West
To be fair, most of the equipment used by TSMC is made by European companies such as ASML.
It's an insurance policy. TSMC/Samsung cannot function without ASML and ASML cannot exist without their demand.
What do you mean exactly? Did you mean reunification? I don't see how that could be a problem; we do business with China already, it would just become one of their states, and you could still trade with them.
Or did you mean that, as a result, the US would not be able to steal the fabs, tech, and workforce? (And by steal I mean purchase, just like we write about how China is stealing our tech when they purchase patents or by other deep internal means.)
Samsung is in a worse position than Intel: not only is it Korean, with problems with its northern neighbor as well as with Japan, but it also got hit by anti-competitive behavior and the trade war from the US, and was told to stop development of its in-house chip (Exynos) to place Qualcomm as the western leader, despite their poor tech.
And I'm not sure if The Verge is a trustworthy source; they either have insider knowledge or their source is manipulative, because that contradicts the Exynos story.
If what they claim is true, then it'll be interesting to see if their "promise" can become a reality; time will tell.
But as a matter of fact, still nothing is happening in the West, and that was my entire point.
I'm sorry what? A chinese invasion of Taiwan would be catastrophic for the global supply chain. It would require a force a few times bigger than the entire D-Day invasion, and there's no way the chip fabs wouldn't be smouldering ruins afterwards. The reason china hasn't already invaded is that it would have a nearly incalculable cost for very little benefit. Far better to rattle the sword for the nationalists every so often and trade your way to prosperity.
But I don't think that's what most Taiwanese want; it would be literal suicide. It's a small island, and they see it as a political affair, not a military one.
The military view is what the western media want to push, for some reason.
We saw it during the Pelosi visit, btw. What's next? Help elect a Zelensky bis? It failed with Abe; the wind is blowing backward, it seems.
In the same sense that Russia is trying to get Ukraine to reunify, sure.
I use the literal term for what's happening between China and Taiwan, that's it.
Objectivity is what drives me; there is no winner if it is built on lies and manipulations.
It is not 3 nanometers according to the metric system anyway.
Mainland flags on the Taiwanese monuments. How many months away, let me check my predictions, I think 70 days or so, give or take 15. Shit getting real hot real fast.
Who needs phones this fast?
To give you some answers, I'd say that newer technology enables the following, completely non-exhaustive list:
- Macbooks that can run on a single charge for whole day and have very high performance.
- Cameras in our pockets/phones so good that almost no one except professional photographers has to carry photo equipment
- Electric cars which can finally travel such distances that they are actually useful
- Combustion cars efficient enough that the cities we live in are not toxic. And they get to be economical too.
- Comfort within a car which we value.
- Phones are small (oh sorry this can cause flamewars, so let's say it like this - circuits get small)
- Technology demand, whether it comes from manufacturing phones or something else, benefits making computers (e.g. Apple Silicon was for phones/tablets, now for Macs). It benefits other vendors as they get access to the technology too. So much investment enables TSMC to actually deliver the technology.
Plus, FPGAs get absolutely huge with terrible yields for complex work. RED Cameras have a massive, multi-thousand-dollar FPGA (crossing over $10K as a part), and it is only powerful enough to process their custom video codec and nothing else. Still cheaper than designing a custom chip for that considering how niche RED Cameras are, but absolutely stupid for anything broader. An FPGA that could run an M1 equivalent would be larger than the laptop, cost over $100K, and be slow as a turtle racing across Oregon.
It's just the perfect storm.
Everything else is want and as a society, we agreed that you don't need to justify your want.
Whether that is a bottle of beer or a new smartphone.
Programmers who cut corners while writing phone software.
The point I was making is that it's not a phone anymore; it's a full-fledged computer that is arguably more advanced than your average desktop.
Do we need more?
The two-day mainstream smartphone is not here yet.
Everyone gets habituated by context to look at the thing more and more as the gateway to everything.
It's not just for phones--these nodes end up powering everything. But it starts with phones because that's where the money is.
Would you rather have Intel keep pushing 14nm+++?