Can't believe the iPhone X has a better and brighter screen than our Note 8

The iPhone X screen is much better than the Note 8's

In some cases it certainly is. For example, I was watching Stranger Things on Netflix in Dolby Vision HDR on the iPhone X, and it absolutely blew me away. The show is very dark, with a lot of black/dark scenes, and the iPhone's screen was perfect: inky black, which made the contrast amazing. The Note 8 didn't do nearly as well with the same content. Although Samsung implemented a different HDR standard (HDR10), I feel its AMOLED screen should still be able to match the black levels of the iPhone X, since Samsung made both displays.

Also, the color accuracy on the iPhone's display is fantastic. I truly thought the Note 8's display was the bee's knees until I got an iPhone X.
 
Haha, yep. Here in the near future we might need eye-protective wear to look at phones lol. But seriously tho, loving this Note 8 :)

This is actually an important point.

One of the things photographers end up realizing is this:

The real world has an incredibly large dynamic range of brightness, even within a typical scene we might gaze upon in everyday life. The difference in brightness between a light-colored object illuminated directly by the sun and a darker object in shadow in that same scene is tremendous.

Our eyes/brains do a good job of "faking us out" when we view such a scene, but it's a cheat. When we look at a scene, our eyes do not record that entire scene in one "snapshot" the way a camera must. And indeed, only a very small area within our vision has much resolution at all. The fovea of each eye has pretty darned good resolution, but outside of that area, it's pretty lame by camera standards.

So our brain constructs a virtual model of the scene for us as we scan around the scene, aiming the fovea of our eyes at areas where we want to capture greater detail. Other areas are "filled in" by our brains, but we haven't really captured high resolution information about most of most scenes we look at. It's largely an illusion created by our brain.

But more to the point of dynamic range: the same is true here as well. Our eyes cannot capture a huge dynamic range all in one "snapshot", either. Instead, as our eyes scan around a scene, they auto-adjust their apertures to gather good detail: they open up to gather more light when looking at a darker area, then stop down to a smaller aperture when looking at brighter areas. All of this data is used by our brain to create a "virtual image" of the scene in our minds. So we think we're seeing a huge dynamic range all at once. But we really are not.

So back to photography and displays:

The real world has huge dynamic range - the difference between the brightest areas in a scene and the darkest.

As cameras evolve over the years, they capture greater dynamic range with every sensor improvement. But they must do this in one "snapshot". They don't (yet) have the ability to cheat and combine a lot of individual "peeks" the way our eye/brain system does. Yes, we have so-called HDR imaging, where several individual images, shot with different exposure settings, are merged into one highly compressed (the opposite of HDR, actually) image. So this avenue is being explored.
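The bracket-and-merge idea can be sketched in a few lines of NumPy. This is a toy illustration with made-up radiance values and a hand-rolled weighting scheme, not any real camera's algorithm (real pipelines, such as Debevec-style merging, also recover the sensor's response curve):

```python
import numpy as np

# Pretend scene radiances spanning a huge range (about 16 stops).
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 60.0])

def expose(radiance, exposure_time):
    """One bracketed 'shot': scale by exposure time, clip to the
    sensor's 0..1 recordable range."""
    return np.clip(radiance * exposure_time, 0.0, 1.0)

times = [8.0, 1.0, 1.0 / 64.0]          # long, normal, short exposures
shots = [expose(scene, t) for t in times]

def merge(shots, times):
    """Estimate radiance by backing out each exposure time, weighting
    well-exposed pixels heavily and clipped pixels almost not at all."""
    est = np.zeros_like(shots[0])
    wsum = np.zeros_like(shots[0])
    for shot, t in zip(shots, times):
        w = np.where((shot > 0.01) & (shot < 0.99), 1.0, 1e-6)
        est += w * (shot / t)           # back out the exposure time
        wsum += w
    return est / wsum

recovered = merge(shots, times)
print(recovered)   # recovers roughly the original scene radiances
```

The point of the toy: no single exposure captures both the 0.001 shadow and the 60.0 highlight, but the merged estimate recovers the whole range, which is exactly the "many peeks" trick our eye/brain does continuously.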

But this brings us back to the real problem with photography (and cinematography): The display!

Our eye/brain simulates a huge dynamic range, and this is the standard against which we compare photography.

Our cameras are getting better at capturing a wide dynamic range in one exposure as sensors improve.

But the real bottleneck is how we display this huge dynamic range. As of now, we simply cannot!

Probably the best displays so far are the OLED types (like the two flagship LG OLED TVs). They're very impressive because blacks can be truly black: the individual OLED pixels simply switch off. An LCD-type display, by contrast, can never truly block out its backlight; it can only turn the backlight down in an area to simulate the effect. In any case, if you haven't looked at one of these new LG OLED TVs, you should. Pretty incredible, but not perfect. They claim "infinite contrast", but this is clearly marketing hype. Still, they're amazingly good!
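The "infinite contrast" claim is just the arithmetic of dividing by a black level of zero. A quick sketch with hypothetical luminance numbers (for illustration only, not measured specs of any particular set):

```python
def contrast_ratio(white_nits, black_nits):
    """Contrast ratio = peak white luminance / black-level luminance."""
    if black_nits == 0:
        return float("inf")   # the "infinite contrast" marketing claim
    return white_nits / black_nits

print(contrast_ratio(500, 0.05))   # LCD-ish black floor: 10000.0 (10,000:1)
print(contrast_ratio(500, 0.0))    # OLED with pixels fully off: inf

# In a lit room, reflected ambient light adds to both the whites and
# the blacks, so even an OLED's real-world contrast is finite.
ambient = 0.5
print(contrast_ratio(500 + ambient, 0.0 + ambient))   # 1001.0
```

Which is why the claim is hype in practice: the screen's black level may be zero, but the room's isn't.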

Theaters must deal with the fact that the difference between the darkest black and the brightest white they can display on screen at once is limited by the film and its projector (or the limitations of a digital projector), by the maximum brightness the projector can produce, and by the ambient lighting in the theater.

Photographic prints are even more limited in their dynamic range. The blackest blacks are a real limit, and prints have the lowest dynamic range of all of the display types.

And here's something to keep in mind, too: We don't really WANT to have too much dynamic range in a print or display system. Our eyes couldn't handle it "in one glance".

We'd find a too-bright area very annoying while trying to observe the overall scene on a phone, TV, movie screen, or photo print. If a display could produce bright areas as bright as direct sunlight falling on a white object, and also blacks as dark as the deepest shadows under some trees and bushes, looking at that scene would be annoying, because our eyes would not be able to open up enough to let us see into those shadows.

So just as with audio, some compression of the dynamics is often beneficial.

I don't want my phone's display to be completely "true to nature". It would be extremely annoying, maybe dangerous to my eyes.

So it comes down to how they handle the compression of the dynamics so as to create a pleasant illusion.
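One simple way to picture that compression is a global tone curve. The sketch below (toy luminance values) contrasts hard clipping with the well-known Reinhard-style operator L / (1 + L), which squeezes a huge range into the display's 0..1 range while keeping highlight detail distinct:

```python
import numpy as np

# Scene luminances spanning roughly five orders of magnitude.
L = np.array([0.001, 0.1, 1.0, 10.0, 100.0])

# Hard clipping: everything above the display's range is lost.
clipped = np.clip(L, 0.0, 1.0)

# Reinhard-style global tone curve: compresses highlights smoothly
# instead of clipping them, so 10 and 100 remain distinguishable.
tone_mapped = L / (1.0 + L)

print(clipped.tolist())                  # [0.001, 0.1, 1.0, 1.0, 1.0]
print(np.round(tone_mapped, 4).tolist()) # [0.001, 0.0909, 0.5, 0.9091, 0.9901]
```

With clipping, the 10x and 100x highlights both land on 1.0 and their detail is gone; the tone curve keeps them separate, at the cost of compressing everything, which is exactly the "pleasant illusion" trade-off.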

In photography, I always think of what's happening for an individual photo I'm taking this way:

The scene has extremely high dynamic range.

When I shoot the photo, when I adjust the exposure settings, I'm choosing to capture a "slice" of the existing scene's dynamic range. I know I'm tossing away the darkest blacks and the brightest highlights. I make my adjustments to capture the "slice" out of the scene's DR that I feel contains the vital information for this image.
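That "slice" idea can be made concrete with a little arithmetic in stops. This is a toy model (made-up scene values, and an assumed 12-stop sensor window) just to show how exposure bias slides the captured window up and down the scene's range:

```python
import numpy as np

# Scene radiances in stops relative to middle gray, spanning ~17 stops.
scene_stops = np.array([-8, -5, -2, 0, 2, 5, 8], dtype=float)

# Suppose the sensor records about 12 stops in a single shot.
SENSOR_STOPS = 12.0

def capture(scene, exposure_bias):
    """Shift the 12-stop window up/down the scene's range via exposure.
    Values outside the window clip to pure black or pure white."""
    lo = exposure_bias - SENSOR_STOPS / 2
    hi = exposure_bias + SENSOR_STOPS / 2
    return np.clip(scene, lo, hi)

# Bias toward the shadows: the +5 and +8 stop highlights blow out.
print(capture(scene_stops, -2.0).tolist())  # [-8.0, -5.0, -2.0, 0.0, 2.0, 4.0, 4.0]
# Bias toward the highlights: the deepest shadows crush to black.
print(capture(scene_stops, 2.0).tolist())   # [-4.0, -4.0, -2.0, 0.0, 2.0, 5.0, 8.0]
```

Either way something outside the slice is thrown away; choosing the exposure is choosing which end to sacrifice.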

Next, I have to make further compromises when I decide how to process the image.

Will I be making a print? Under what sort of lighting will this print be viewed? Can I control that at all, or should I assume a wide range of possible lighting?

Will I be posting the image to a website that will be viewed on a wide range of typical computer displays? What color space will the people viewing it be using? What lighting will be present in the rooms where people are viewing these displays? What DR does the typical computer display have? How is it adjusted?

Yes, we want our displays to have good, accurate color, wide dynamic range, high resolution, etc. But as displays improve, the people producing the content will still be faced with deciding how to process images so they look best for the most people, across all their displays and ambient viewing conditions.

The display on my Note 8 is excellent. And even with my glasses off (I'm nearsighted, and see close-up things much better with my glasses off) and viewing as critically as I can, I'm not sure I can really appreciate the highest resolution setting. I need to play with it more, but I might back it off to the default middle setting if I decide that I can't really see much improvement, with my eyes, when using the finest resolution setting.

A good friend at work just got an iPhone X, and it does have a great-looking display. I haven't watched movies or done any critical image viewing on it, but it does look good. It may be that Apple's decision to go with larger individual pixels (lower PPI) was wise. It may be that the slightly larger individual LED sites give the phone better dynamic range and/or better color resolution.

But I simply cannot complain, at this point, about the Note 8's display.

My friend uses his phone for everything. It is his TV, computer, music source, etc.

On the other hand, I use a lot of desktop PCs and still have "regular" TV at my house. So I watch movies on a TV, not so much on my phone. And I view and do editing of images on desktop PCs with their displays.

We were joking that we wondered if anyone would even really miss it if one of these "phones" didn't actually work as a telephone these days!

But getting off-topic, I have to say that the "phone" in this Note 8 works better than my previous cell phone, and better than a lot of cordless phones I use, too. It still doesn't match the quality of my old 1960s and 1970s wired analog telephones (remember those? A wired, dedicated phone?). The fact is that over a hundred years ago, the designers of "telephones" came up with very clever designs for the phones and the transmission system that are still better than our modern cell phones, cordless, or even cheesy modern wired phones.

But in any case, just realize that the display technology for our phones and other displays will be limited by what our eyes can handle "at a glance". And that means that we simply cannot tolerate too high a dynamic range.

For me, the reason I need high brightness in a phone or camera display is simply so I can see it when I'm using it in bright conditions. I hate not being able to read my phone's display, or use a camera well, when I am outside in bright conditions. So I'm thankful for high overall brightness at times. But not necessarily extreme dynamic range.

And back to the OP's comment: Eye protection really would be a concern if our displays truly could produce "lifelike" dynamic range and brightness! So do we really want or need that?
 
A lot of knowledge there, my friend. I mean that in a good way of course lol :) Very good points.
 
Thanks! I know that was extremely long for a forum post.

In fact, when I posted it, I got a message from the site saying that it wouldn't appear until it could be 'moderated'. I've never seen that before. I'm guessing it was because of its excessive length. :)
 
You're welcome, and lol, I got the same message when I quoted you for the long post. It's all good tho :) I've had that happen before when I quoted someone who had a lot of links in their post. I think I was told it had something to do with the spam filter when I asked one of the mods.
 
I didn't quote your entire post but I enjoyed it all. Thanks!
 
I have a couple iPhone Xes and am considering adding a Note 8 to the fold. The iPhone’s screen is fine and I’ve read that it’s accurate. The Note 8’s screen is fun. ...and FUN is a big deal in my book. Every time I look at a Note 8 screen I think, “Wow girl, that’s gorgeous.” I love the saturated colours.
 