
cesium-sandwich t1_ja2c73b wrote

..and a CPU+GPU that can generate a full frame of AAA graphics in under 1 ms. Good luck with that. Same thing with Apple's "retina" displays: yeah, they're nice to look at, but it's REALLY hard to feed a high-res image at any decent framerate.

Doubling the frame size in each dimension means quadrupling the number of pixels in the GPU framebuffer = 4x more horsepower to feed it.
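
Back-of-the-envelope, just to put numbers on it (assuming a plain 4-byte RGBA color buffer and ignoring depth, G-buffers, post-process targets, etc., so treat these as lower bounds):

```python
# Rough framebuffer size and scan-out bandwidth: bytes per pixel * pixels * refresh rate.
BYTES_PER_PIXEL = 4  # simple RGBA color buffer; real pipelines carry much more

def framebuffer_cost(width, height, hz):
    """Return (MB for one color buffer, GB/s just to refresh it)."""
    frame_bytes = width * height * BYTES_PER_PIXEL
    return frame_bytes / 2**20, frame_bytes * hz / 2**30

for w, h, hz in [(1920, 1080, 60), (3840, 2160, 60), (3840, 2160, 120)]:
    mb, gbps = framebuffer_cost(w, h, hz)
    print(f"{w}x{h} @ {hz} Hz: {mb:.1f} MB per frame, {gbps:.2f} GB/s")
# Going from 1080p to 4K (double each dimension) quadruples the per-frame bytes.
```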

42

quettil t1_ja2w81y wrote

Is it not possible to do some sort of real-time interpolation between frames?

4

oodelay t1_ja4k4z1 wrote

Yeah but it becomes interpolation, not the real thing. It's gonna make the soap opera effect so much worse.
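
(The crudest version is a plain linear blend of two real frames with no motion estimation, which is a big part of why cheap motion smoothing looks off. A toy sketch, assuming frames are numpy arrays; real frame generation adds motion vectors on top, but it still inserts synthesized frames between real ones.)

```python
import numpy as np

def blend_frames(frame_a, frame_b, t):
    """Naive 'interpolated' frame: a straight linear blend of two real frames
    at time fraction t (0..1). No motion estimation, so moving objects ghost."""
    mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mix.astype(frame_a.dtype)

# Synthesize one in-between frame from two 1080p RGB frames.
a = np.zeros((1080, 1920, 3), dtype=np.uint8)
b = np.full((1080, 1920, 3), 255, dtype=np.uint8)
mid = blend_frames(a, b, 0.5)   # every pixel lands around 127
```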

13

theinvolvement t1_ja2lmab wrote

What do you think about fitting some logic between the pixels at the cost of pixel density?

I was thinking it could handle some primitive draw operations, like vector graphics and flood fill.

Instead of trying to drive every pixel, you could send tiles of texture with relatively low resolution, and use vector graphics to handle masking of edges.
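
(If I'm reading this right, it would look roughly like the sketch below: ship a low-resolution texture tile plus a vector edge, and let display-side logic rasterize the mask at full pixel density. All names and shapes here are mine, purely illustrative.)

```python
import numpy as np

def upscale_nearest(tile, scale):
    """Upscale a low-res tile to panel resolution by pixel repetition."""
    return np.repeat(np.repeat(tile, scale, axis=0), scale, axis=1)

def half_plane_mask(h, w, p0, p1):
    """Rasterize one vector edge at full resolution: keep the pixels lying on
    one side of the directed line p0 -> p1 (a half-plane test)."""
    ys, xs = np.mgrid[0:h, 0:w]
    (x0, y0), (x1, y1) = p0, p1
    return ((x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0)) >= 0

# A 16x16 texture tile sent over the link, shown on a 128x128 patch of panel.
tile = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
patch = upscale_nearest(tile, 8)                      # 128x128 upscaled texture
mask = half_plane_mask(128, 128, (0, 96), (128, 32))  # sharp vector edge
patch[~mask] = 0                                      # trim to the outline
```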

0

asdaaaaaaaa t1_ja2pr26 wrote

I'd imagine every extra step between "generate graphics" and "display" adds a considerable amount of latency. From my understanding we're already at the point where having the CPU physically close to related chips (memory is one, IIRC) makes a difference. Could be wrong, but from my understanding the last thing you want to do is throw a bunch of intermediate hardware/steps into the process if you can avoid it.

9

cesium-sandwich t1_ja2ps0i wrote

There are some economies of scale involved, especially for high-density displays.
The GPU does a lot of the heavy lifting.
But even simple-ish games often take multiple milliseconds of CPU time to simulate one frame, and that work doesn't transfer to the GPU, so doubling the framerate means halving the time available for physics + gameplay + other CPU work, since you have half as much time to do it.
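
Rough budget math (the 6 ms simulation figure below is just an illustrative number, not from any particular game):

```python
# Per-frame time budget at various refresh rates, versus a hypothetical fixed
# chunk of CPU-side simulation (physics + gameplay) that can't move to the GPU.
SIM_MS = 6.0  # illustrative figure only

for hz in (30, 60, 120, 240):
    budget_ms = 1000.0 / hz
    print(f"{hz:>3} Hz: {budget_ms:5.2f} ms/frame, "
          f"{budget_ms - SIM_MS:6.2f} ms left after {SIM_MS} ms of simulation")
# At 240 Hz the budget (4.17 ms) is already smaller than the simulation cost.
```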

3

rumbletummy t1_ja35lae wrote

You mean like CAD?

1

theinvolvement t1_ja4bihn wrote

I am not sure. What I'm thinking of is a GPU that can output tiles of image data, plus an outline that trims the image to a sharply defined shape.

So the monitor would receive an array of images tiled together, and instructions to trim the edges before displaying them on screen.
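
Concretely, the "instructions" could be as simple as a tile plus a clip polygon in each packet. A made-up packet layout, just to show the shape of the idea (none of these field names come from any real display link):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TilePacket:
    """Hypothetical link-level packet: a low-res texture tile plus a vector
    clip path that display-side logic would rasterize at native pixel density."""
    x: int            # tile position on the panel, in pixels
    y: int
    width: int        # on-panel size after upscaling
    height: int
    texture: bytes    # low-res RGB payload
    clip_path: List[Tuple[float, float]] = field(default_factory=list)
    # polygon vertices in tile-local coordinates; an empty list means no trimming

packet = TilePacket(x=256, y=128, width=128, height=128,
                    texture=b"\x00" * (16 * 16 * 3),
                    clip_path=[(0.0, 0.0), (128.0, 0.0), (96.0, 128.0)])
```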

It's kind of a pipe dream I've had since hearing about vector graphics video codecs last decade, and microLEDs a few years ago.

1