Prestigious_Carpet29
Prestigious_Carpet29 t1_jaf0klb wrote
ELI5 + "University project"
Really?
Anyhow, the keywords you need are Refraction https://en.wikipedia.org/wiki/Refraction for why light bends when it enters glass or water etc.,
and Dispersion https://brilliant.org/wiki/dispersion-and-scattering-of-light/
Dispersion is the phenomenon whereby the refractive index isn't actually a constant for a material, but depends slightly on the wavelength of the light. This is why prisms split white light into a "rainbow" spectrum, and why lenses suffer chromatic aberration (unwanted differences in focus for different colours, and coloured fringes on the edges of things).
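To make that concrete, here's a minimal Python sketch of Snell's law with slightly different refractive indices for red and blue light (the glass values are approximate, crown-glass-like, purely illustrative) - the small difference in exit angle is the dispersion:

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    theta1 = math.radians(theta_incident_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))

# Approximate refractive indices for a typical crown glass (illustrative values):
n_air = 1.000
n_glass_red = 1.513   # ~656 nm
n_glass_blue = 1.526  # ~486 nm

incidence = 45.0  # degrees from the surface normal
print(refraction_angle(incidence, n_air, n_glass_red))   # ~27.9 degrees
print(refraction_angle(incidence, n_air, n_glass_blue))  # ~27.6 degrees - blue bends a little more
```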
Prestigious_Carpet29 t1_jaev1a6 wrote
Reply to ELI5: How does an iPhone detect if charging cords are “made for iPhone” certified? by DPRobert
I don't know about modern charging cords, but not very long ago some Apple USB/charging cables had slightly "non-standard" value termination/load resistors (passive components costing less than a cent) in them, which meant the Apple device could tell whether it was an "Apple" cable or a generic standard one...
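Purely to illustrate the principle (the resistor values below are made up - I'm not claiming these are Apple's actual numbers or method), detection can be as simple as measuring a termination resistance and seeing which nominal value it's closest to:

```python
# Illustrative only: hypothetical ID-resistor values, not Apple's actual scheme.
KNOWN_ID_RESISTORS_OHMS = {"vendor_cable": 75_000, "generic_cable": 56_000}

def classify_cable(measured_ohms: float, tolerance: float = 0.05) -> str:
    """Return the label whose nominal resistance the measurement falls within."""
    for label, nominal in KNOWN_ID_RESISTORS_OHMS.items():
        if abs(measured_ohms - nominal) <= nominal * tolerance:
            return label
    return "unknown_cable"

print(classify_cable(74_200))  # -> vendor_cable
```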
Prestigious_Carpet29 t1_j78cegk wrote
Reply to comment by DatsunL6 in Does the central part of my vision see in a different frame rate than the outer part? by Calvinkelly
Not quite true. The true blind-spot is a little off-axis from the centre of vision.
What you describe is the effect whereby dimmer stars may seem to "disappear" when you look straight at them, because the centre of vision, the fovea (while having higher resolution and colour sensitivity), is less light-sensitive.
Prestigious_Carpet29 t1_j6zpopw wrote
If your string were stretched between two anchors on a solid metal bar, such that the string could vibrate but nothing else was acting as a sounding board, and you then took the construction into an anechoic room and plucked the string, you might find that the sound is directional. The directivity would be quite broad, but you might notice "nulls" (quiet directions) when the vibration was tangential (sideways) rather than back-and-forth towards you.
Although there might still be enough turbulence around the string for the sound to be radiated in most directions. A vibrating ribbon, rather than a string, might demonstrate the effect better.
In a normal (non-anechoic) room you get enough sound bouncing chaotically off the walls/floor/ceiling that you won't really detect "quiet" directions from a sound source.
Changing subject slightly: if you have a tuning fork, the sound from that is typically somewhat directional, as I recall.
Prestigious_Carpet29 t1_j6ke26i wrote
Reply to comment by Prestigious_Carpet29 in Philips to cut 13% of jobs in safety and profitability drive by 4Wf2n5
They've been increasingly just trying to jump on bandwagons, rather than seek long-term innovation.
Also >15 years ago they said their target market was over-50's.
Will their target-market eventually die out?!
Prestigious_Carpet29 t1_j6jk01l wrote
Having worked for Philips in the past, I think it's clear they have been in "managed decline" for at least two decades, probably at least three.
They've reduced the breadth of what they do, and sold off ever more pieces (NXP, lighting...) ...
Prestigious_Carpet29 t1_j68qfaf wrote
Reply to comment by dmmaus in How close does one need to bring two coloured lights together to perceive a compound colour effect? by romxza
Yes.
See metamerism
This is also why paint-matching can be a huge problem. You can get two paints that look the same colour under one light source (e.g. daylight) but are visibly different under a different source (e.g. fluorescent, or sodium streetlights).
Prestigious_Carpet29 t1_j5qap0a wrote
Reply to Why does hot air cool? by AspGuy25
As others have said, the metal part isn't cooler - it's an artifact of the thermal camera.
It comes down to emissivity, and reflections.
A "thermal camera" or "(non-contact) IR thermometer" measures the radiated heat (long-wavelength thermal energy) emitted by the object you're pointing it at.
To a good approximation, objects emit a spectrum (strictly, a spectral distribution) of radiation whose shape depends only on their temperature - look up blackbody radiation and colour temperature.
The absolute amount of energy ("brightness") also varies strongly with temperature, but depends also on a property of the material, known as its emissivity.
For simplicity, IR cameras typically only measure the strength of emission at one wavelength (usually somewhere around 3-12µm), and determine the temperature by the "brightness" at that wavelength.
For most common matt/dielectric (non-metal) materials the emissivity is 97-99% - which is the default calibration for an IR thermometer... but metals, especially polished shiny ones, and gold in particular, have a lower emissivity, so the IR thermometer will under-read.
You can look up the emissivity for different materials and set the IR thermometer calibration accordingly, to get correct readings ... but be aware that metals can also look like mirrors, and you may "read" the temperature of the thing in the reflection in the metal, rather than the metal itself - or somewhere between the two.
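To show the effect numerically, here's a simplified sketch (it treats the sensor as responding to total radiance via the Stefan-Boltzmann T^4 law and ignores the reflected-ambient term a real instrument also sees, so the numbers are only indicative):

```python
def apparent_temperature_k(true_temp_k: float, actual_emissivity: float,
                           assumed_emissivity: float = 0.97) -> float:
    """Temperature the instrument would report if calibrated for 'assumed_emissivity'."""
    return true_temp_k * (actual_emissivity / assumed_emissivity) ** 0.25

skin_k = 273.15 + 33.0  # finger at ~33 degC
print(apparent_temperature_k(skin_k, actual_emissivity=0.98) - 273.15)  # ~33-34 degC: reads about right
print(apparent_temperature_k(skin_k, actual_emissivity=0.1) - 273.15)   # reads far colder (low-emissivity metal)
```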
If you wear a metal ring (especially a gold one) on your finger and point the thermal camera at your hand, you'll see the ring is darker (and reads "colder") - even though its true temperature is likely to be close to that of your fingers.
This is something that not enough people know about IR thermometers and IR cameras.
It's a physics thing! :-)
Prestigious_Carpet29 t1_j5ftbtr wrote
Reply to comment by nosnowtho in How do phased array antennas receive signals? by nosnowtho
(Digital) Signal-processing is a very broad field, but very powerful and important in modern communications systems.
As examples, you get audio signal-processing for lossy compression (bit-rate reduction), echo-cancellation and speech-recognition, and signal processing of radio-frequency signals in any "digital"-mode transmitter or receiver, such as a mobile phone, DAB radio or digital TV (take a deep breath and look up OFDM :-) ).
It's amazing how Fourier transforms invented (or perhaps "discovered") by Joseph Fourier 200 years ago are at the heart of so much of modern technology.
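As a tiny taste of that (just a sketch, assuming NumPy), a Fourier transform picks a tone straight out of a noisy sampled signal:

```python
import numpy as np

fs = 8000                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)  # 440 Hz tone plus noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
print(freqs[np.argmax(spectrum)])  # ~440.0 Hz - the dominant tone
```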
Prestigious_Carpet29 t1_j5frb0q wrote
Reply to Is the whole you’re not fully developed until you’re 25 thing true or is it exaggerated because I keep hearing a lot of conflicting things about it? by BlackCat0110
From a general science perspective (I have no specialist human-body expertise) I would suggest that it's arguable that development is asymptotic - the rate-of-development or mental-maturing slows down as you approach the end-point, but never quite gets there.
I would interpret any figure (e.g. age 25) for being "fully developed" as an indicative ballpark rather than a discrete finishing line that you cross.
My understanding (non-expert) is that the brain is constantly developing through life, making new connections, letting old ones weaken...
The general public, the popular press, and the legal folks crave certainty and absolutes. In real life, things are generally a bit fuzzier!
In physics and electronics we often measure or define when a switching signal gets to 90% or 95% of its final value, as defining when you reach 100% is really difficult (and not practically useful).
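As a concrete (if idealised) example of that asymptotic behaviour, here's the arithmetic for a first-order exponential approach to a final value:

```python
import math

# v(t) = V_final * (1 - exp(-t/tau)); time to reach a given fraction of the final value:
def settling_time(fraction: float, tau: float = 1.0) -> float:
    return -tau * math.log(1.0 - fraction)

for frac in (0.90, 0.95, 0.99, 0.999):
    print(f"{frac:.1%} of final value after {settling_time(frac):.2f} time-constants")
# 90% takes ~2.3 tau, 99.9% takes ~6.9 tau... and 100% is never quite reached.
```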
Prestigious_Carpet29 t1_j5eylrj wrote
The simple answer is that a suitably-designed phased array of the same size (same cross-sectional area) as an equivalent dish would be expected to perform approximately equally (in terms of directivity and signal-strength). On those basic measures, it can't really perform better than a dish.
In a receiver, the signals from the individual array elements are electronically summed (at some stage of the signal processing) with subtle time-delays between the elements in order to "phase" the array and create the required directivity. Signals from the wanted direction will sum constructively, while signals from other directions will tend to be non-coherent (at random phases) from the different received elements, and thus average down to a proportionately lower level in the summation.
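For anyone who wants to see that summing in action, here's a minimal narrowband sketch (assuming NumPy; real systems use true time-delays or more elaborate processing, so treat this as illustrative only):

```python
import numpy as np

n_elements = 16
wavelength = 1.0
spacing = wavelength / 2
steer_deg = 20.0                      # direction we want the array to "listen" towards

k = 2 * np.pi / wavelength
element_pos = np.arange(n_elements) * spacing
weights = np.exp(-1j * k * element_pos * np.sin(np.radians(steer_deg)))  # per-element phase offsets

angles = np.linspace(-90, 90, 721)
arrivals = np.exp(1j * k * np.outer(np.sin(np.radians(angles)), element_pos))
response = np.abs(arrivals @ weights) / n_elements   # 1.0 where the element signals sum coherently

print(angles[np.argmax(response)])    # ~20 degrees: the main beam points where we steered it
```

Changing steer_deg just changes the phase offsets in software - which is the "electronic" steering described below; applying a second, different set of weights to the same element signals gives a second simultaneous beam.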
The key advantage of phased-arrays is that the beam-direction can be "electronically" steered (i.e. by changing timings in the signal-processing), as opposed to having to physically move a dish. The electronic beam-steering can be essentially instantaneous, whereas the rate at which you can move a dish on motors is limited by physical mass, inertia, motor-power, ... and will be subject to mechanical wear. This high-speed steering is near-essential for tracking low-earth satellites, or military radar, among other applications.
There may then be second-order benefits such as physical simplicity, lighter weight, less wind-loading etc. But if you need super-fast scanning, then moving a physical dish of more than a certain size is simply impractical. Very large dishes are major engineering projects, as the dish needs to retain its shape to within a fraction of a wavelength (say 1/10th of a wavelength) as it is moved and steered. In contrast, a phased array can be fitted to a flat (or even uneven) surface or terrain and "electronically" flattened (corrected for physical distortions).
A further benefit of phased arrays (for relevant applications) is that you can double up (or triple, or...) in your signal-processing, and then receive from two or more directions simultaneously with the same physical antenna. That's something you simply cannot do with a dish. Again useful for Starlink-type applications where you have multiple low-earth satellites, or military radar when you want to track multiple fast-moving targets.
If you have a phased array with a lot of elements then you may be able to control side-lobes better than with a dish, which may be important if you not only want to maximise the signal strength of a wanted signal, but also reject or suppress an unwanted signal of the same frequency but coming from a different physical direction.
Phased-array antennas with many elements are likely to be more expensive than dishes, often considerably so, although the cost of RF electronics is continually falling.
(I don't have any first-hand experience with phased-array antennas, but I have a lot of experience with signal-processing in other applications, where the underlying reasoning is similar.)
Prestigious_Carpet29 t1_j52tw21 wrote
Reply to comment by JensenWench in Researchers at UC San Diego found that a single 20 minute session under a UV-light nail polish dryer results in cell death for 20-30% of exposed cells. The UV emissions also damaged DNA and caused cellular mutations. by bog_witch
I'm a guy, and am not into fancy nails at all. Also a physicist - gut response is I can't imagine why you'd want to use UV light for that for more than 30 seconds!
(And/or if it really takes 20 minutes of UV there's probably a better way of achieving the same end)
Prestigious_Carpet29 t1_j4x9srl wrote
Because of the tilted axis of the earth, the sun only rises due east, and sets due west, at the spring and autumn equinoxes (around 20th March and 22nd-23rd September).
The deviation from due east / due west at other times of the year is smallest near the equator and "beyond extreme" ;-) once you go beyond the Arctic or Antarctic circles.
https://www.timeanddate.com/astronomy/uk
Will show you lots of interesting information about the time of sunrise and sunset, and what azimuth the sun rises or sets at, for different places at different times of the year.
The link above takes you to a UK page, but you can set it to any country.
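If you want to play with the geometry yourself, the standard textbook approximation (ignoring atmospheric refraction) for the sunrise azimuth is cos(azimuth) = sin(declination) / cos(latitude). A quick sketch:

```python
import math

def sunrise_azimuth_deg(latitude_deg: float, declination_deg: float) -> float:
    """Geometric sunrise azimuth, in degrees east of due north (refraction ignored)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(declination_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Summer solstice, solar declination ~ +23.44 degrees:
print(sunrise_azimuth_deg(0, 23.44))    # ~67 degrees - modest deviation from due east at the equator
print(sunrise_azimuth_deg(52, 23.44))   # ~50 degrees - much larger deviation at UK latitudes
print(sunrise_azimuth_deg(0, 0))        # 90 degrees - due east at the equinox, whatever the latitude
```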
Prestigious_Carpet29 t1_j4t28de wrote
Reply to comment by Lord_Gadget in If the left side of your brain controls the right side of your body and vice versa, then what does that mean about people who are left handed? by Reflector368
This question is about how the left side of your brain controls the right side of your body and vice versa - which I believe is evidenced by cases where people suffer a brain injury on one side and lose control of the other side of their body.
This is completely different to the bogus concept of "right-brained and left-brained people" where one dominant side is supposedly more creative and the other more analytical - which at best is just a lazy psychological metaphor.
Prestigious_Carpet29 t1_j4t0cjk wrote
Reply to What exactly is the process when someone "trains" an AI to learn or do something? by kindofaboveaverage
"AI" and "machine learning" tend to be used interchangeably, especially in mass-media articles. In theory "AI" is more "intelligent" but ... well.
Anyway in a previous job I worked on a "machine learning" project which used a "binary classifier" (a relatively simple machine-learning method) to determine whether a short sound recording was "baby" or "not baby".
To train it, we had a whole load of sound recordings (.wav files), of domestic "non-baby" sounds, like hand-washing dishes, washing machine, vacuum cleaner, TV etc. And a load of "baby" sounds, which included babies babbling as well as crying. The "training" comprised getting the program to analyse those sounds (from the two labelled categories) and "learn" how to classify them. Set the computer-program running, and wait for an hour or two...
As with much audio-processing (including speech recognition), the sounds were analysed in short pieces lasting a few tens of milliseconds each, each characterised with about 20-30 parameters relating to the frequency content and rate of change with time. In this case the "training" was essentially fitting a (hyper)plane through the 20-30 dimensional space of those parameters, splitting the set into "baby" on one side and "non-baby" on the other. Once trained, you could then give the algorithm new recordings that it hadn't "heard" before, and it would classify them accordingly.
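In spirit (this isn't the actual project code - just a minimal sketch assuming scikit-learn, with synthetic stand-ins for the per-frame feature vectors), the training step looks something like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend each ~30 ms frame has already been reduced to a 24-dimensional feature vector
# (spectral shape, rate of change, etc.). Here we fake two labelled clouds of such vectors.
rng = np.random.default_rng(0)
baby_frames = rng.normal(loc=0.5, scale=1.0, size=(2000, 24))       # label 1
not_baby_frames = rng.normal(loc=-0.5, scale=1.0, size=(2000, 24))  # label 0

X = np.vstack([baby_frames, not_baby_frames])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

# "Training" = fitting a separating hyperplane through the 24-dimensional feature space.
clf = LogisticRegression(max_iter=1000).fit(X, y)

new_frames = rng.normal(loc=0.5, scale=1.0, size=(10, 24))  # unseen "baby-like" frames
print(clf.predict(new_frames))  # mostly 1s, because they resemble the "baby" training cloud
```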
A problem (symptomatic of many machine learning methods) was that if you presented it with a recording of a baby but with some other sound in the background - even just a cooker-hood fan, that it hadn't been trained for - it would fail to recognise the baby.
There is an ever-present danger with AI/ML systems that if you haven't included all possible confounding factors in the training data, they may completely and unexpectedly fail to work properly when that factor pops up in the real world.
Prestigious_Carpet29 t1_j4slp11 wrote
Also note that far more things "are known to cause cancer in California" than anywhere else!
For various interesting/historic reasons, California has exceptionally tight environmental restrictions (for pollutants etc), and one of the lowest thresholds for labeling products as "suspected" or "known" to cause cancer.
I'm in no position to judge whether California is "reasonable" or "over-cautious", but people who want to stir up panic will always cite contamination levels (e.g. in water) compared to California's limits!
Prestigious_Carpet29 t1_j36fn03 wrote
I have a broad knowledge of physics and engineering, but pressing/stamping is not my particular area of expertise.
The Wikipedia image linked by HankScorpi-vs-the-World indicates feature sizes almost as small as 100nm for Blu-ray (that's maybe a factor of 2 or 3 smaller than I was expecting, but may well be correct).
I suspect the problem of how small features can be pressed comes down to economics: I expect the metal stamping tool will wear with use, so as the feature sizes get smaller you probably get fewer good-quality stampings out of it before you need a new tool.
Although progress is being made with LEDs that emit deeper into the ultraviolet, once you get into the UVB range the light is strongly absorbed by plastic lenses and discs, so I can't imagine this technology has a whole lot further to run anyway.
Prestigious_Carpet29 t1_j2w5jt0 wrote
Reply to comment by hmartin430 in How close does one need to bring two coloured lights together to perceive a compound colour effect? by romxza
In response to hmartin430, my expertise is really in the use and application of CIE colour matching and display screen technologies, rather than the actual structure of the human eye.
My understanding too is that the rods are sensitive to low light (and saturate at higher light levels).
If we just consider the cones in the fovea, there are three types of cone: L, M, S (long, medium, short wavelength), which are very loosely red, green, blue. Their wavelength sensitivities are actually much broader, and much more heavily overlapping, than a true RGB split. The CIE colour-matching functions (and resulting "chromaticity" coordinates) X,Y,Z are mathematically related to the L,M,S cone spectral sensitivities but are not quite the same thing (it's a long story...). The XYZ colour-matching functions are 'mathematically fudged' slightly such that the Y-coordinate represents luma (brightness) as well as (sort of) "green".
Grappling slightly for a consistent solution to all these things, I believe the answer is that in the fovea there is the highest density of M-cones, fewer L-cones, and fewer S-cones still. This means that our "luma" resolution is highest, red-green resolution is somewhat lower, and blue-yellow resolution the lowest. (In practice you need to match the luma (brightness) of the coloured test stimuli to really demonstrate this effect, otherwise if "yellow" is much brighter than your "blue" it may be resolved in luma even if it isn't really resolved in chroma.)
Again from a technological perspective, the Bayer colour filter array pattern used in the vast majority of electronic colour-camera sensors has twice as many green pixels as blue and red, which again roughly maps onto human-eye properties to get the "best" visual image from finite technical resources. https://en.wikipedia.org/wiki/Bayer_filter
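A quick sketch (assuming NumPy) of what that RGGB mosaic means in practice - each sensor pixel records only one colour, with green sampled twice as densely as red or blue:

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Reduce an H x W x 3 image to the single value per pixel an RGGB Bayer sensor records."""
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G (green appears twice per 2x2 block)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

image = np.random.rand(4, 4, 3)
print(bayer_mosaic(image).shape)  # (4, 4) - one sample per pixel; half of them are green
```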
Prestigious_Carpet29 t1_j2ups0a wrote
Plenty of other good answers.
In addition to the comments about the air effectively being "clearer" at low temperatures etc., there is a known visual or psychovisual phenomenon whereby higher-contrast images or scenes are perceived as being sharper (this is exploited by people trying to sell you new TVs etc.).
"Mucky" air will decrease the contrast (as well as perhaps physically blurring) which will make the scene "pop" less. As others have said, the angle of the sun can also dramatically affect the scene contrast.
Related to this, I find general urban street scenes "pop" in the sunshine shortly after rain - the rain clears the air, washes away dust, and if surfaces are wet and shiny the contrast is much higher - it can look "hyper-real".
Prestigious_Carpet29 t1_j2uolrk wrote
Reply to How close does one need to bring two coloured lights together to perceive a compound colour effect? by romxza
I was going to say, an LCD (or other computer/TV) display screen is the classic example of where colours in close proximity (RGB sub-pixels) are perceived as the compound colour.
Basically I second kilotesla's answer, but will add some additional clarification and related ideas.
The "resolution" of the human eye is highest in the very central few degrees of vision, the "fovea", which is populated with "cone" cells which are colour-sensitive. The rest of the visual field is mostly populated by rod cells, which are only brightness-sensitive - but are sensitive to lower light-levels. See https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_9/ch9p1.html#:~:text=Rods%20are%20responsible%20for%20vision,is%20populated%20exclusively%20by%20cones.
Empirically (presumably in the fovea) the human visual system has a higher "resolution" for brightness or "luminance" than for colour - this has been exploited for decades in the way analogue colour television or JPEG images are encoded - with the colour being coded ("sub-sampled") at a lower resolution than the brightness (to reduce the information), with little visual perceptual loss.
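As a rough sketch of what that sub-sampling means (assuming NumPy; the real JPEG/TV colour transforms are a little more involved, so take the coefficients as indicative):

```python
import numpy as np

def subsample_chroma(rgb: np.ndarray):
    """Keep brightness at full resolution; store colour-difference channels at half resolution each way."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luma (full resolution)
    cb, cr = b - y, r - y                    # colour-difference ("chroma") channels
    return y, cb[::2, ::2], cr[::2, ::2]     # each chroma channel keeps 1/4 of the samples

image = np.random.rand(8, 8, 3)
y, cb, cr = subsample_chroma(image)
print(y.shape, cb.shape, cr.shape)  # (8, 8) (4, 4) (4, 4)
```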
Empirically the black-and-white resolution of the eye is of the order of 300dpi at about 14 inches for someone with good vision who can focus properly, in high-ambient light levels. You could probably find a reference to the definition of "20/20 vision" and get a comparable angle subtended. In very low light levels, the effective resolving power will be lower.
Combining those two observations, I would expect the colour resolution to be something like 80-150dpi at 14 inches. This is equivalent to a subtended angle of around 1/(100*14) radians, so (180/pi)/(100*14) = 0.04 degrees, give or take.
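The same arithmetic as a little sketch, if you want to try other numbers:

```python
import math

def subtended_angle_deg(lines_per_inch: float, distance_inches: float) -> float:
    """Angle subtended by one line at the given density and viewing distance (small-angle approximation)."""
    return math.degrees((1.0 / lines_per_inch) / distance_inches)

print(subtended_angle_deg(300, 14))  # ~0.014 degrees - black-and-white (luminance) resolution limit
print(subtended_angle_deg(100, 14))  # ~0.041 degrees - rough colour resolution limit
```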
If the coloured lines or stripes are closer than that sort of subtended angle, the colours are likely to merge into one - they will not be "resolved". The merging will work a bit better where the pattern is alternating (like in a TV or computer screen), rather than just a single source of each colour - in the latter case you may still perceive a coloured "fringe" on each side, even when you can't properly resolve the two colours.
In the early days of LCD computer screens, in the early 2000s, when they were only 1024x768 resolution and before the days of sub-pixel font rendering, if you had white text on a black background where the letters were only 1 pixel wide, the text often appeared to have some chromatic aberration - an orange tinge on the left of the letters and a bluey tinge on the right - just because of the subpixel layout. As displays became higher resolution, fonts were often rendered more than 1 pixel wide, and/or more-clever sub-pixel rendering techniques were used (such as Microsoft's ClearType), so these effects largely became consigned to history.
See also https://en.wikipedia.org/wiki/Subpixel_rendering#:~:text=Subpixel%20rendering%20is%20a%20way,the%20screen%20type's%20physical%20properties.
Don't take my word for it... you could print a piece of paper with fine alternating black and white lines and establish at what distance the lines cease to be resolved and "go shimmery" and then merge into grey - to get a gauge of your own personal black and white resolution.
If you can find an old Trinitron cathode-ray tube (this uses RGB lines of phosphor) - which is lower resolution than modern displays, you could try the same thing - look closely, then move back until the colours merge. If you can determine the pitch of the stripes and the distance, you can work out the subtended-angle when the colours merge.
Perhaps more easily, you could create a graphic of red/green/blue stripes on your computer (make each line several pixels wide) then see how far away you need to be for the lines to merge and it looks white.
The results are likely to be slightly different (fuse at a slightly closer distance) if you match the luminance of the coloured stripes (have blue at full brightness, red somewhat less, and green lower still), making probably a bluey-lilac colour when merged.
If you do some Google searches (other search engines are available) relating to measuring the resolving power of optical systems, contrast ratio etc, this will get you a sense of the underlying physics, which is then largely applicable to the eye - for the purposes of the question.
(I'm a physicist/electronics engineer, who has also spent several years of my professional life in optics, imaging systems, colour-reproduction and display-screen technology.)
Prestigious_Carpet29 t1_jdqgi9j wrote
Reply to comment by ch1214ch in How do the two eyes see in registration with one another? by ch1214ch
I don't know about how the brain is wired, but from a simple optics/geometry perspective, I think we can reason that your "tied 1:1 ..." suggestion is unlikely.
In any given scene, the two eyes don't see exactly the same thing, owing to the different viewpoints. We experience "stereo-disparity", and the principal effect of that is that the relative horizontal alignment (in the two eye) of different points in the scene depends on their depth.
I would argue (I can't prove) that we perceive a range of depths "instantaneously" without having to scan the eye-divergence to bring each conceivable depth into alignment (to meet some 1:1 mapping).
Similarly, if you were to look off-axis (like 30 degrees to the left or right) at something quite close (e.g. 20 cm away), the images will be noticeably different sizes on the two retinas (provable from basic geometry), so again a "1:1 mapping" isn't helpful - and in reality we can still fuse a 3D image in the brain.
I've spent a lot of time in the past creating 3D autostereograms and thinking about stereoscopic depth perception - and depth reconstruction from an image-pair. It's not trivial.
At some level the brain must be 'correlating' the two images with a range of possible horizontal-offsets (dependent on relative depth), and some small finite vertical tolerance too (to allow for optical distortions and misalignments). I think I read about tests (or maybe did my own tests 20+ years ago) showing that the human brain can stereo-fuse (and perceive different depths) even if the image presented to the left and right differ in size/magnification by up to about 10%.
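Just to illustrate the kind of horizontal-offset correlation I mean (this is a toy computer-vision sketch assuming NumPy, not a claim about how the brain actually does it):

```python
import numpy as np

def best_disparity(left: np.ndarray, right: np.ndarray, row: int, col: int,
                   patch: int = 4, max_disparity: int = 16) -> int:
    """For a patch in the left image, find the horizontal offset in the right image that matches best."""
    ref = left[row:row + patch, col:col + patch]
    scores = [np.abs(ref - right[row:row + patch, col - d:col - d + patch]).sum()
              for d in range(max_disparity + 1)]
    return int(np.argmin(scores))   # offset (in pixels) of the best match = the disparity

# Synthetic example: the "right eye" image is the "left eye" image shifted by 5 pixels.
left = np.random.rand(64, 64)
right = np.roll(left, -5, axis=1)
print(best_disparity(left, right, row=20, col=30))  # 5
```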
Also this video is quite interesting https://www.youtube.com/watch?v=DkaJ6iK2CJc The ability to barrel-roll the eye (to a limited extent) is likely part of human "optical image stabilisation" !