Thrawn89

Thrawn89 t1_jacrxbn wrote

The IRC allows this, but you need to provide fireblocking with an approved material (1/2" gypsum board, 3/4" plywood, and fire-rated spray foam for gaps are the most common). The blocking must be installed every 10 feet both horizontally and vertically, as well as at every horizontal-to-vertical cavity transition (i.e., where a wall intersects a soffit).

This means you'd need to add 3/4" plywood or similar to close the gap between the interior top plate and the exterior top plate, and you'd need to add the same blocking every 10 feet along the wall, between the interior stud and exterior stud, from the sole plate to the top plate. Though I think stuffing fiberglass into the gap is allowed if it's secured mechanically.

8

Thrawn89 t1_j6no21t wrote

GPUs are not inherently better at floating point operations; they are just better at doing them in parallel via SIMD, the same as any other operation that benefits from SIMD.

In fact, floating point support is generally not quite as good as on a CPU. Some GPUs do not natively support double precision, or don't natively support every floating point operation. Then there's denorm behavior and rounding modes, which are scattered across implementations. Many GPUs take shortcuts by not implementing a full FPU internally and convert to fixed point instead.
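As a concrete, CUDA-specific illustration of the rounding-mode point (other GPU stacks expose this differently, if at all): on NVIDIA hardware the rounding mode is typically chosen per instruction via intrinsics rather than through a global FPU control word. A minimal sketch, with the example values picked just to show the effect:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The same single-precision add, rounded four different ways. On a GPU the
// rounding mode is baked into the instruction, not set in a control register.
__global__ void rounding_demo(float a, float b, float* out) {
    out[0] = __fadd_rn(a, b);  // round to nearest even
    out[1] = __fadd_rz(a, b);  // round toward zero
    out[2] = __fadd_ru(a, b);  // round up (toward +inf)
    out[3] = __fadd_rd(a, b);  // round down (toward -inf)
}

int main() {
    float* out;
    cudaMallocManaged(&out, 4 * sizeof(float));
    // The exact sum 1.0 + 1e-8 is not representable in float, so the
    // round-up result differs from the other three.
    rounding_demo<<<1, 1>>>(1.0f, 1e-8f, out);
    cudaDeviceSynchronize();
    printf("rn=%.10f rz=%.10f ru=%.10f rd=%.10f\n", out[0], out[1], out[2], out[3]);
    cudaFree(out);
    return 0;
}
```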

−1

Thrawn89 t1_j6nkd1y wrote

True, SIMD is absolutely abysmal at branches since it usually needs to execute both the true and false cases for the entire wave. GPUs do have optimizations here (a branch that no lane takes can be skipped entirely), so it's not always terrible.

It sounds like you're describing a 512-bit vector instruction set, which is very much specialized for certain tasks such as memcpy and not much else? That's just an example of a small SIMD unit on the CPU.
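For a concrete picture of the divergence cost, here's a hedged CUDA sketch (the kernel name and the even/odd split are invented for illustration): when lanes within one 32-wide warp disagree on a branch, the hardware runs both sides with the inactive lanes masked off.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void divergent(const int* in, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Adjacent lanes in the same warp take different paths here, so the warp
    // executes BOTH branches back to back, masking off the inactive lanes.
    if (in[i] % 2 == 0)
        out[i] = in[i] * 3;
    else
        out[i] = in[i] + 7;
    // A condition that is uniform across the warp (e.g. one based only on
    // blockIdx.x) would cost only the path actually taken.
}

int main() {
    const int n = 64;  // two warps' worth of data
    int *in, *out;
    cudaMallocManaged(&in, n * sizeof(int));
    cudaMallocManaged(&out, n * sizeof(int));
    for (int i = 0; i < n; ++i) in[i] = i;

    divergent<<<1, n>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[0]=%d out[1]=%d\n", out[0], out[1]);  // 0*3=0, 1+7=8

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```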

1

Thrawn89 t1_j6myw1u wrote

The explanation you are replying to is completely wrong. GPUs haven't been optimized specifically for vector math for something like 20 years. They all operate on what's called a SIMD architecture, which is why they can do this work faster.

In other words, they can do the exact same calculations as a CPU, except they run each instruction on like 32 shader instances at the same time. They also have multiple shader cores.

The CUDA core count Nvidia quotes is this 32 × the number of shader cores; in other words, how many parallel ALU calculations the chip can do simultaneously. For example, the 4090 has 16384 CUDA cores, so it can do 512 unique instructions on 32 pieces of data each.

Your CPU can do maybe 8 unique instructions on a single piece of data each.

In other words, GPUs are vastly superior when you need to run the same calculation on many pieces of data. This fits graphics well, where you need to shade millions of pixels per frame, but it works just as well for, say, calculating physics on 10,000 particles at the same time or simulating a neural network with many neurons.

CPUs are better at calculations that only need to be done on a single piece of data, since they are clocked higher and have no setup latency.
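A minimal, self-contained CUDA sketch of that "same instructions, many pieces of data" model (the kernel name, array size, and the toy calculation are just illustrative choices):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One list of steps (the kernel), executed simultaneously by thousands of
// threads, each on its own element. Groups of 32 threads (a warp) march
// through the instructions in lockstep.
__global__ void scale_add(const float* x, float* y, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which piece of data am I?
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // ~1M elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;  // threads per block (8 warps)
    int blocks = (n + threads - 1) / threads;
    scale_add<<<blocks, threads>>>(x, y, 3.0f, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```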

2

Thrawn89 t1_j6mvpr4 wrote

It's a great explanation, but there are a few issues with the metaphor's correctness.

The kids are all working on the exact same step of their individual problem at the same time. The classroom next door is on a different step for their problems. The entire school is the GPU.

Also, replace the kids with undergrads: they don't work on 1+1 problems, they work on exactly the same kind of problems the CPU does.

To translate: the reason they are undergrads and not mathematicians is that GPUs are clocked lower than CPUs, so they don't do the individual work as fast. However, the gap between a mathematician and kids is a few too many orders of magnitude.

Also, they work on problems of the same complexity. GPUs have been heterogeneous compute platforms, rather than strictly graphics processors, ever since the programmable shader model was introduced and made them Turing complete. The GPU's ALUs and shader model can handle programs as complex as a C program these days.

The classroom in this analogy is what DX calls a wave, and each undergrad is a lane.

In short, there is no huge difference between a GPU and a CPU beyond the GPU using what is called a SIMD (single instruction, multiple data) architecture, which is what this analogy was trying to convey.

Programs, whether CPU machine code or GPU machine code, are basically a list of steps to perform. A CPU runs the program by going through each step and applying it to a single instance of state. A GPU, however, runs the same step on multiple instances of state at the same time before moving on to the next step. An instance of state could be a pixel, a vertex, or just a generic compute instance.
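To make that last paragraph concrete, here is a hedged CUDA-flavored sketch (the "brighten" operation is invented for illustration, not a real API): the same list of steps, run one instance at a time on the CPU versus one instance per thread on the GPU. In CUDA terms, DX's wave is a warp and each lane is one thread within it.

```cuda
// The same "list of steps", expressed both ways.

// CPU: one worker walks every pixel, one at a time.
void brighten_cpu(float* pixels, int n) {
    for (int i = 0; i < n; ++i)
        pixels[i] = pixels[i] * 1.1f + 0.05f;
}

// GPU: every thread runs the same steps on its own pixel. Threads are grouped
// into 32-wide warps (DX waves), and all lanes in a warp execute each step
// together before moving on to the next one.
__global__ void brighten_gpu(float* pixels, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which instance of state
    if (i < n)
        pixels[i] = pixels[i] * 1.1f + 0.05f;
}
```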

27

Thrawn89 t1_ix3li31 wrote

We live in a society. Social customs exist so that people are used to interacting with each other in a relatively consistent way. Everyone acts differently, sure, but when a social custom is violated, it's extremely noticeable. Customs don't always have a practical use or make sense.

Knocking before entering is a social custom that people follow even when they intend to open the door without waiting for a response.

3

Thrawn89 t1_islv3lj wrote

I think the question is based on a false premise. What makes you think strep throat can't cause necrotizing fasciitis? A more common complication from strep is scarlet fever, which can also be deadly. Like most infections, it needs to be watched and potentially treated if it proliferates.

1