Neuroimaging Methods

Neuroscience is a young field driven by technological advances of shared importance to the medical, health and science technology sectors, and its impact reaches into nearly every aspect of modern life.


As scientific knowledge becomes increasingly accessible, closer scrutiny of techniques allows researchers to clarify and validate the quality of research used to inform legislation and medical best practice. It is inevitable in the scientific process that subsequent research may overturn earlier findings, and numerous domains of neuroscience have turned out to be marred by publication bias – especially neuroimaging of complex psychological conditions.

To explain this, let’s look at some early computational theories of the brain and then see how similar concepts are used within the neuroimaging software that quantifies data. With this information we can explain where previous research may have misrepresented findings, and how better methods can be applied moving forward.

An influential model is Sparse Distributed Memory, in which brain activity is represented topologically, rather like a two-dimensional checkerboard: each grid square stands for the neurons in an area that either have or haven’t fired, like a binary I/O switch where black indicates a signal at or above some minimal threshold (I, or ‘on’). This may seem simplistic, but it turns out to be rather accurate – ‘bottom-up’ visual processing is in fact topological, from light entering the retina through to the optic fibres feeding information forward to higher cognitive processes:

Sparse Memory..jpg

In this example, a sparse matrix where 20% of the pattern is random noise: the signal can still be inferred, with some residual error.

On the other hand, given sufficient noise a true signal may be mis-inferred, or reconstructed as a different mental representation:

Images: Rogers, 1988. See also Pentti Kanerva, 1988

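To make the idea concrete, here is a minimal Python/NumPy sketch. It is not Kanerva’s full model (which uses distributed hard address locations and counters); it simply stores a handful of sparse binary patterns and recalls the nearest one by Hamming distance, showing that a pattern can still be recovered from a cue with roughly 20% of its bits flipped. All sizes and the noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few sparse binary patterns (the "memory"): ~20% of bits are on.
n_bits, n_patterns = 256, 5
memory = (rng.random((n_patterns, n_bits)) < 0.2).astype(int)

def recall(cue, memory):
    """Return the stored pattern closest to the cue by Hamming distance."""
    distances = np.abs(memory - cue).sum(axis=1)
    return memory[np.argmin(distances)], distances.min()

# Corrupt one stored pattern by flipping 20% of its bits (the "noise").
original = memory[2].copy()
flips = rng.random(n_bits) < 0.2
noisy_cue = np.where(flips, 1 - original, original)

recovered, distance = recall(noisy_cue, memory)
print("recovered the original pattern:", np.array_equal(recovered, original))
```

With enough noise, of course, the cue can end up closer to a different stored pattern – the ‘mis-inferred representation’ described above.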

These concepts are also compatible with computational theories of language – for example TRACE word retrieval, priming, and Chomskyan language acquisition – which have been highly influential in psychology and neuroscience, with input simplified into graphemes or phonemes, for example:

Adaptive Switching Circuits_Early Training Algorithms.png

Image: Widrow & Hoff, 1960


The signal-to-noise ratio is the essential concept for statistical analysis of neurological processes, captured by the simple equation:

 

Outcome = Model + Error
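As a concrete illustration of Outcome = Model + Error, here is a minimal Python/NumPy sketch with made-up data: a straight-line model is fitted by least squares, and whatever the model cannot explain is left in the residuals (the ‘error’).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a linear signal buried in Gaussian noise.
x = np.linspace(0, 10, 100)
outcome = 2.5 * x + 4.0 + rng.normal(0, 3.0, size=x.size)

# Fit the model (a straight line) by least squares.
slope, intercept = np.polyfit(x, outcome, deg=1)
model = slope * x + intercept

# Outcome = Model + Error: the residuals are what the model cannot explain.
error = outcome - model
print(f"fitted slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"residual SD (noise estimate) = {error.std(ddof=2):.2f}")
```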

 

Furthermore, dopaminergic processes, which are central to psychological states and addictive behaviours, can be derived from this formula and simplified into the reward prediction error model:

Prediction error = λ − V

 

Dopamine response = reward occurred – reward predicted.[1]


[1] Learning in response to common addictive substances, including alcohol, is described by the Rescorla-Wagner learning rule for Pavlovian conditioning (Rescorla & Wagner, 1972), where λ (lambda) is the maximum conditioned limit of learning and V represents the current associative strength. The key point is that conditioned learning is driven not simply by the co-occurrence of the conditioned stimulus with the unconditioned stimulus, but by how unanticipated that occurrence is. It is the error in prediction that produces the stronger (dopaminergic) neural response, followed by neural strengthening.
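A minimal simulation makes the footnote concrete. The learning-rate value below is illustrative only; the point is that the change in associative strength on each trial is proportional to the prediction error (λ − V), which shrinks as learning proceeds.

```python
import numpy as np

def rescorla_wagner(n_trials=30, alpha_beta=0.3, lam=1.0):
    """Simulate associative strength V over conditioning trials.

    On each trial the change in V is proportional to the prediction error
    (lambda - V); alpha_beta lumps the two learning-rate parameters together
    (the values here are illustrative, not fitted to any data).
    """
    V = np.zeros(n_trials + 1)
    for t in range(n_trials):
        prediction_error = lam - V[t]        # reward occurred - reward predicted
        V[t + 1] = V[t] + alpha_beta * prediction_error
    return V

V = rescorla_wagner()
print(np.round(V[:6], 3))  # V climbs toward lambda as the prediction error shrinks
```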

 

The same logic applies to colour processing and digital neuroimaging, which follow the biological limits of human visual processing and thresholds for Just Noticeable Difference, depending on scale and how differentiable the stimuli are (e.g. colour wavelength).

Colour Noise

Fourier analysis is one method for separating wavelengths of varying frequencies – proverbially, extracting a signal or colour from random noise, like a picture emerging from the static on a detuned TV.
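Here is a small NumPy sketch of that idea (the 7 Hz frequency, sampling rate and noise level are arbitrary): a sine wave is buried in noise that swamps it in the time domain, yet its frequency stands out clearly in the Fourier power spectrum.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 7 Hz sine wave buried in broadband noise, sampled at 100 Hz for 10 s.
fs, duration = 100, 10
t = np.arange(0, duration, 1 / fs)
signal = np.sin(2 * np.pi * 7 * t) + rng.normal(0, 2.0, size=t.size)

# Fourier analysis: the power spectrum separates the frequencies.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
print(f"dominant frequency ≈ {peak:.1f} Hz")  # ≈ 7 Hz despite the noise
```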

Imaging software can produce the same outcome by assigning each point in the colour space a number, which can then be defined within a range, or distance, to other colours in three-dimensional vector space. From there, a value can be selected from an image within a certain range – for example blue at RGB (R:18, G:78, B:230) ± 5, 10 or 15 intensity values per channel.
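A sketch of that selection in Python/NumPy, using a made-up image array; both the per-channel ± range and the Euclidean distance in 3-D colour space are shown, and the tolerance values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical RGB image as an (H, W, 3) array of 8-bit values.
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

target = np.array([18, 78, 230])   # the blue from the text, RGB (18, 78, 230)
tolerance = 10                     # +/- range, purely illustrative

# Select pixels whose colour lies within the tolerance box around the target...
within_range = np.all(np.abs(image.astype(int) - target) <= tolerance, axis=-1)

# ...or, alternatively, within a Euclidean distance in 3-D colour space.
distance = np.linalg.norm(image.astype(int) - target, axis=-1)
within_sphere = distance <= tolerance

print(f"{within_range.sum()} pixels in the +/- {tolerance} box, "
      f"{within_sphere.sum()} within distance {tolerance}")
```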

Delta - Just Noticeable Difference.jpg


Why does this matter? Well, there are a few reasons.


First of all, it demonstrates the limits of human visual processing, which are what make algorithmic training and digital image processing necessary for inferring findings in the first place.

Secondly, spatial mappings of colours in vector space allow us to encode the data in various forms, such as linear matrices, to better represent complex patterns that could reveal latent morphological trends hidden in what otherwise appears to be slight visual randomness.

Thirdly, it matters for applying threshold limits correctly to the data, rather than according to subjective aesthetics – which are themselves highly prone to bias.

Now let’s consider this information in the context of some research conducted on rats last year, where neural tissue was tagged with a marker to identify activation patterns following exposure to alcohol:

NAcSH.jpg

The dark rectangle in the image above is the nucleus accumbens shell (NAcSH) region, where the photomicrographs were taken.


Above this, uniformly distributed noise has been overlaid at increasingly high values to determine the best threshold for defining the selectable pixel area. Extraneous noise alters the spatial distribution of the colour space, so even slight additions of noise can considerably falsify the spatial distribution.
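The effect is easy to demonstrate with a toy example. The sketch below builds a fake micrograph (bright background, dark ‘nuclei’), overlays uniform noise at increasing amplitudes, and counts the pixels that fall below a fixed threshold: the count is stable up to a point and then departs sharply from the true value. All intensity values and amplitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fake greyscale micrograph: dark "nuclei" (value 60) on a brighter background (160).
image = np.full((200, 200), 160.0)
for _ in range(50):                                   # 50 fake nuclei
    r, c = rng.integers(10, 190, size=2)
    image[r - 3:r + 3, c - 3:c + 3] = 60.0

threshold = 110  # pixels darker than this count as "tagged"

for amplitude in (0, 20, 40, 60, 90):
    noisy = image + rng.uniform(-amplitude, amplitude, size=image.shape)
    count = np.count_nonzero(noisy < threshold)
    print(f"noise +/-{amplitude:>3}: {count} pixels below threshold")
```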

Looking more closely, it is apparent that raw images are needed, and that they should be captured in a standardised format, before inferences can be made.


Top left: the raw microscope file. Top right: a contrast filter applied, polarising the extreme values. Bottom left: the thresholding algorithm picking up the tagged nuclei, in bright red. Bottom right: the defined regions ranked and numbered.


In this instance, neural counting is based on a defined parameter for what counts as dark pixel space, which is used to define and tag a boundary area. Tagging can therefore be somewhat arbitrary, depending both on how closely a region must adhere to sphericity and on what diameter limits are allowed. In the lower-right image, each identified element is tagged with a numerical code that can be extracted, along with its coordinates and pixel properties, into a spreadsheet for use in more comprehensive calculations.
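A rough sketch of that counting pipeline, using scikit-image’s connected-component labelling and region properties. The mask, diameter limits and circularity cut-off below are placeholder values for illustration, not the ones used in the study.

```python
import numpy as np
from skimage import measure

# Placeholder binary mask: a few synthetic "nuclei" of different sizes.
# In practice this would be the boolean output of the thresholding step.
binary = np.zeros((200, 200), dtype=bool)
rr, cc = np.ogrid[:200, :200]
for r0, c0, rad in [(40, 50, 6), (120, 80, 10), (160, 160, 3), (60, 150, 18)]:
    binary |= (rr - r0) ** 2 + (cc - c0) ** 2 <= rad ** 2

labels = measure.label(binary)                 # tag each connected region with a number
regions = measure.regionprops(labels)

min_d, max_d = 4, 30                           # assumed diameter limits, in pixels
min_circularity = 0.6                          # 1.0 = perfect circle (assumed cut-off)

kept = []
for r in regions:
    diameter = 2 * np.sqrt(r.area / np.pi)     # equivalent circular diameter
    circularity = 4 * np.pi * r.area / (r.perimeter ** 2) if r.perimeter > 0 else 0
    if min_d <= diameter <= max_d and circularity >= min_circularity:
        kept.append((r.label, r.centroid, r.area))

print(f"{len(kept)} of {len(regions)} regions pass the shape criteria")
# `kept` (label, coordinates, area) could then be written out to a spreadsheet.
```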

Thresholding is therefore not incidental to the process of neural analysis: it can alter the findings if it is not adequately defined to prevent image distortion.



Consider, for example, photos that varied in lighting and contrast due to inevitable smudging on the slides: they would need to be adjusted to a common standard, and some images would have more data clipped than others.

Ultimately, this means that findings can turn out to be ‘significant’ or ‘not significant’ depending on what level the threshold has been set to.

Neural Data Results Graphs.png

Data trends can be systematically skewed by these kinds of inclusion-exclusion parameters – for example if they happen to coincide with particular regions of interest in the brain and reverse inferences (a logical fallacy) are then drawn from them, which turns out to happen often in medical neuroimaging publications.

So. Looking at this again more closely, an approach could be as follows:

In an instance such as this, where regions within a neuroimage are being quantified according to a thresholding parameter defined in a colour space, the correct application should be determined first. A manual threshold can be set from a test run and then applied across all images in a set, but automated thresholding is likely preferable for a number of reasons: automated thresholds are based on predefined algorithms formulated from analysis of the pixel and texture properties of the images in question. They are therefore standardised, can be compared more readily with other findings, and are suitable for meta-analysis where a best-practice method has been agreed upon.

Following that rationale, the optimal automated threshold can be chosen according to criteria such as how the background is classified and whether the technique thresholds locally or globally. The background can also be subject to clamping effects, where a range of tones that were actually expressed is blocked out entirely.
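For illustration, here is how a global and a local threshold can be compared with scikit-image on a toy image with uneven ‘illumination’; the image, block size and offset are invented for the sketch.

```python
import numpy as np
from skimage import filters

rng = np.random.default_rng(6)

# Placeholder image: bright blobs on a background that darkens toward the right,
# mimicking uneven lighting across a slide.
rows, cols = np.mgrid[0:200, 0:200]
gray = 0.8 - 0.4 * (cols / 200)
gray[(rows - 60) ** 2 + (cols - 60) ** 2 <= 100] = 1.0
gray[(rows - 140) ** 2 + (cols - 160) ** 2 <= 100] = 1.0
gray = gray + rng.normal(0, 0.02, size=gray.shape)

# Global threshold: one cut-off for the whole image (Otsu's method).
global_t = filters.threshold_otsu(gray)
global_mask = gray > global_t

# Local threshold: the cut-off follows the neighbourhood mean, which copes
# better with the sloping background. The negative offset lifts the local
# threshold slightly above the neighbourhood mean, so only clearly bright
# pixels pass.
local_t = filters.threshold_local(gray, block_size=35, offset=-0.05)
local_mask = gray > local_t

print(f"global Otsu threshold = {global_t:.3f}")
print(f"foreground fraction: global {global_mask.mean():.2%}, local {local_mask.mean():.2%}")
```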

A reductive but workable description of pixel sampling is a selection process in which the local mean intensity of neighbouring pixels is used to derive a binary greyscale mapping from a lowest-order sample value. Nearest neighbour sampling can be thought of as pixel sampling based on proximity: it can be used reductively – downsampling for low-fidelity images – or to upscale degraded images by filling in additional pixels to construct a higher-resolution image.

Nearest Neighbor Interpolation.jpg
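A minimal demonstration of nearest-neighbour resampling, here using SciPy’s `ndimage.zoom` with spline order 0 (the tiny 4 × 4 array is just to make the effect visible).

```python
import numpy as np
from scipy import ndimage

# A tiny 4x4 "image" so the effect is easy to see.
small = np.arange(16).reshape(4, 4)

# Upsample 2x with nearest-neighbour interpolation (order=0):
# each original pixel is simply repeated; no new values are invented.
upsampled = ndimage.zoom(small, 2, order=0)

# Downsample back by taking every second pixel.
downsampled = upsampled[::2, ::2]

print(upsampled[:4, :4])                    # blocks of repeated values
print(np.array_equal(downsampled, small))   # True: the original grid is recovered
```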

Knowing this, a number of automated thresholds can be tested first, to assess which gives values closest to (and with the smallest deviation from) an optimal, though not standardised, manual threshold check. Essentially, the algorithms tag neighbouring pixels according to slightly different rules and degrees of freedom, and some will be better at blocking out the background.
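One way to run that comparison is to loop over several of scikit-image’s automated threshold functions and rank them by how far each falls from the manual reference value. The image and the manual threshold below are placeholders; in practice the comparison would be run per image across the whole set.

```python
import numpy as np
from skimage import filters

rng = np.random.default_rng(7)
gray = rng.random((200, 200))            # placeholder greyscale image

manual_t = 0.45                          # hypothetical threshold from a manual test run

methods = {
    "otsu": filters.threshold_otsu,
    "yen": filters.threshold_yen,
    "li": filters.threshold_li,
    "triangle": filters.threshold_triangle,
    "mean": filters.threshold_mean,
}

results = {name: fn(gray) for name, fn in methods.items()}
for name, t in sorted(results.items(), key=lambda kv: abs(kv[1] - manual_t)):
    print(f"{name:>8}: threshold = {t:.3f}, |difference from manual| = {abs(t - manual_t):.3f}")
```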

High-fidelity images are therefore important because they have a greater bit-depth range – an 8-bit image, for example, has 2^8 = 256 intensity levels – and medical imaging applications often require 10- or 12-bit sampling as standard, to reduce rounding errors in computations and extraneous compression artefacts that may misrepresent results.

Bit Range.jpg

Such images could be reprocessed at 16-bit depth (2^16 = 65,536 levels) to ensure that no algorithmic misreadings occurred, since it cannot be ruled out that thresholding algorithms, and therefore the automated counts, would vary accordingly.
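The rounding-error argument can be shown directly: quantising the same smooth signal to 8, 12 and 16 bits gives maximum rounding errors that shrink by orders of magnitude. A small NumPy sketch, with an artificial gradient standing in for the ‘true’ signal:

```python
import numpy as np

# A smooth gradient of "true" intensities between 0 and 1.
true_signal = np.linspace(0, 1, 10_000)

def quantise(values, bits):
    """Round continuous intensities to the nearest level of a b-bit scale."""
    levels = 2 ** bits - 1
    return np.round(values * levels) / levels

for bits in (8, 12, 16):
    error = np.abs(true_signal - quantise(true_signal, bits))
    print(f"{bits:>2}-bit ({2**bits:>6} levels): max rounding error = {error.max():.2e}")
```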

Based on this understanding of preliminary image analysis, we can look more closely at a range of statistical possibilities for data inference: for example, examining classified pixel groups, or applying higher-dimensional clustering and pattern regressions to ‘broad spectrum’ data sets where there may be latent subgroupings. Shape analysis can also be defined in the colour space to compare topological differences between groups such as age and sex, again looking for patterns.
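As a sketch of the clustering idea, scikit-learn’s KMeans applied to an invented table of per-region measurements separates two latent subgroups that would not be obvious from raw counts alone (all feature values here are made up).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)

# Hypothetical per-region measurements (e.g. area, intensity, circularity, texture).
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(60, 4)),   # one latent subgroup
    rng.normal(loc=3.0, scale=1.0, size=(60, 4)),   # another latent subgroup
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("cluster sizes:", np.bincount(kmeans.labels_))  # the two subgroupings recovered
```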


Spectral composition is especially important in noisy images, where some tones may be excluded algorithmically even though they were legitimate, reducing both the total and the relative count of included elements.

A final point of interest is Principal Component Analysis in the context of neuroimaging, given the points already made. Factor groupings of various elements may vary or overlap depending on how thresholding has saturated pixels, and it is possible that some ranges or texture components are not uniformly classified.
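A minimal PCA sketch along those lines, using scikit-learn on an invented region-by-feature table; if the leading components shift when the thresholding step changes, that is a sign the classification is not uniform across the image set.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)

# Hypothetical feature table: one row per tagged region, with columns such as
# area, mean intensity, circularity and local texture contrast (all invented).
features = rng.normal(size=(120, 6))

# Standardise, then project onto the principal components.
standardised = (features - features.mean(axis=0)) / features.std(axis=0)
pca = PCA(n_components=3)
scores = pca.fit_transform(standardised)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```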

Current research: the aim of all this is to apply best-practice image analysis and digital accessibility standards to neuroscience and digital neuroimaging, within the broader context of scientific accessibility and veracity.

This is especially important in such a burgeoning field, where conflicting findings and methodologies are to be expected and clear communication of theory and outcomes is indispensable. The applications discussed here extend further into human neuroimaging such as MRI, CAT and PET, emphasising that simple oversights can turn out to be nontrivial. Visual science, in all its forms, is perhaps the most accessible medium for conveying information succinctly, particularly to patients seeking insight into their various states and neurological conditions.

Finally, the outcome of all of this would be reliable, valid and exacting findings – as well as certified top-shelf science.


References


Kanerva, P. (1988). Sparse Distributed Memory. MIT Press.

Rogers, D. (1988). Kanerva's Sparse Distributed Memory: An Associative Memory Algorithm Well-Suited to the Connection Machine. RIACS Technical Report 88.32, Research Institute for Advanced Computer Science, NASA Ames Research Center.

Widrow, B., Hoff, M.E. (1960). Adaptive Switching Circuits. Office of Naval Research Centre: Stanford Electronics Laboratories. Technical Report No. 1553-1.

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the associability of stimuli with reinforcement. Classical conditioning: II. Current theory and research, 64-99.

Infographics Crash Course

Last year I had quite a few students seeking infographics training for their work. I’m new to this area of design myself, but I’ve learned on the job, and I thought I’d share some tips while I’m taking a break from teaching this year during Covid-19.

Here’s a poster I put together for some of my own research last year — not free from mistakes or technical errors as you can see, but an example nonetheless of the applications of technical illustration.

This is actually an excellent area of design to get started in if you’re new to digital art — you can work mostly with templates and a mouse without needing many additional drawing tools or skills.

Therefore without further ado, here are some
Beginner’s Tips for Infographic Design.

10 Tips for Infographic Design

1. Use vector graphics!


Above image: a partially degraded rat. The left side shows a raster image (pixel-based, or ‘resolution-dependent’), which should be displayed at one size only. Vector graphics are scalable, a must for designing infographics. It will save you a lot of time and trouble. Trust me.


Hot design tip: if you are going to steal a design by taking a screen shot of something you liked online but didn’t want to pay for, at least trace over the image with a vector pen tool, or convert it with the Trace Tool (e.g. in Adobe Illustrator), so that the edges are sharp! This will save you from the dreaded anti-aliased edge which leaves a ring of pixels around your bitmap shape outline when you try to fill it in with colour (in other words, don’t rip off graphics).

Rat Brain.jpg

You can trace over colours for emphasis too.

2. Make Simple Icons and Avatars

You can draw your own images by hand and convert them to traced scans that you can use as logos or icons. The best method is usually to map them with 2 or 3 colours only, then alter the colours as you go.

For photos, polarise the tonal values first for a high contrast image.

This works great for making face avatars of your friends/coworkers, too.
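If you’d rather script this step than do it by hand, here is a rough sketch using the Pillow library; the random placeholder image stands in for a real photo you would load with Image.open, and the three-colour choice is just an example. It polarises the tonal values with autocontrast and then quantises the result down to a few colours, ready to trace.

```python
from PIL import Image, ImageOps
import numpy as np

rng = np.random.default_rng(10)

# Placeholder photo; in practice you would load one with Image.open("portrait.jpg").
photo = Image.fromarray(rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8))

# Polarise the tonal values for a high-contrast image...
high_contrast = ImageOps.autocontrast(photo.convert("L"), cutoff=10).convert("RGB")

# ...then map the result to 3 colours only, ready to trace as a logo or icon.
three_tone = high_contrast.quantize(colors=3)
three_tone.save("avatar_base.png")
```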

Blue Cat Avatar

3. Subscribe to a Stock Graphics Library

Instantly download a range of templates that will do most of the design work for you. Stock Adobe is excellent: www.stock.adobe.com

Stock.Adobe.com

4. Make a Colour Palette

Find your affiliation’s official colours (they should have RGB/CMYK codes listed in a portfolio book for their designers to refer to – if they don’t, tell them to) and make yourself a colour palette:

Take 3 colours and make 3 tint swatches of each of them. So, if you have a protected logo colour such as ‘Cerulean Blue’ at 100% tonal value, reduce it to an 80% tint, a 60% tint and a 20% tint for example. Make sure the graded tint values are the same for each colour.
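If you prefer to compute the tints rather than eyeball them, a tiny Python helper does the job. The ‘Cerulean Blue’ RGB value below is a placeholder, so substitute your organisation’s actual code.

```python
def tint(rgb, percent):
    """Blend a colour toward white; 100% returns the original colour."""
    return tuple(round(255 - (255 - channel) * percent / 100) for channel in rgb)

cerulean = (42, 82, 190)                 # placeholder RGB for 'Cerulean Blue'
for p in (100, 80, 60, 20):
    print(f"{p:>3}% tint: RGB {tint(cerulean, p)}")
```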

[I put a warming filter over these, to make the range a bit more complex].

tint swatches.jpg

Main Colours (Top Row)

Second Row: 80%
Third Row : 60%
Fourth Row : 20%

Et Voila!



You have a design palette to work with that complies with guidelines and you can now justify your colour matches with logic.

It’s easy then to match colours by their percentage value when designing your layouts and you’ll never have to do colour guess work again.



5. Make your Design Accessible



There is an international web consortium (the W3C) that sets the global accessibility standards, the Web Content Accessibility Guidelines (WCAG)*. Not only is this information important ethically and legally (particularly if you work for a public or government organisation that must comply), but it also turns out that ‘good design’ tends to correspond with ‘ethical design’.



Example:

Consider that a significant number of people (mostly men) are partially or completely colourblind. This means many people can see a range of colours, but some values blend into one another. It is therefore not best design practice to pair similar colours (like turquoise and light blue) in overlapping imagery, as some individuals simply cannot see the content. Contrast ratio guidelines have been designated, simplified below for reference.

Contrast Ratios WCAG 2.0 AA

*For more or to see the actual ratios, visit Web Content Accessibility Guidelines : www.w3.org

This rationale seems evident when you consider that text is usually printed in black on white paper, the highest contrast ratio.
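If you want to check a pairing yourself, the WCAG 2.0 contrast ratio can be computed directly from RGB values; below is a small Python sketch of the published formula (relative luminance, then (L1 + 0.05) / (L2 + 0.05)). The turquoise and light-blue values are only illustrative.

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance for an 8-bit sRGB colour."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # black on white: 21.0
print(round(contrast_ratio((64, 224, 208), (173, 216, 230)), 2)) # turquoise on light blue: very low
```

For normal-sized text, WCAG 2.0 AA requires a ratio of at least 4.5:1, which the turquoise/light-blue pairing falls far short of.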

It’s a common human error to assume that others see the world precisely as you do, so it’s good to check with a benchmark when you’re unsure.

Which brings me to my next point:



6. Work out your Contrast Ratios

If you’re not sure about your visual breadth and range, consider taking a moment to work out your Delta score, a relative difference measure that can be used for colours.

Most people reach their limit (threshold, or Just Noticeable Difference) somewhere between the two shades of blue below. You can check their proximity, or degrees of freedom, on a colour palette or 3D colour cube if you’re interested to learn more (personally, as an artist, I have always been curious whether these differences affect the way people respond to art, such as Picasso’s acclaimed Blue Period paintings).

Delta E-12

Design Tip:

I tried the Plugin ‘Check Contrast Ratio’ in my 2018 Photoshop program and it did most of the work for me. See image below, the tool checks if your text and contrast complies with the level of regulation your department requires (A, AA, or AAA WCAG compliance). I haven’t checked for updates, but I’d highly recommend it.

Plug-In Check Contrast Ratio


7. Optimise and scale to your platform

For web display, you only need RGB colour and 72 ppi (pixels per inch) for optimal resolution, versus 300 dpi (dots per inch) when printing – and your display will load much faster. Anyone who follows dashboard analytics on a website will tell you that you’ll be lucky if someone lands on your page for more than 2-3 seconds, so fast loading time is a must.

You can easily add multiple ‘break points’ to your design code if you would like to display imagery on multiple devices. Responsive frameworks like Bootstrap are not specific to infographics, but you may want to consider implementing Responsive Design within your projects for more versatile outcomes in the future. It means you can pin infographic items separately so they scale relative to the screen size of whatever device they are displayed on.

8. Use Font & Typography Best Practise Design



You can take the guesswork out of font and typography choices for optimal viewing, accessibility and reading speed by following scientific principles. For example, serif and sans serif fonts differ according to the scale at which they are displayed on a screen: light passes through their letterforms differently, so eye fatigue sets in at different rates.

Rather than trying to sift through the infinite fonts and styles, here’s what I recommend if you’re getting started with UX design as part of your infographic work: calculate the optimal size of your font from basic trigonometry (rather than by font or graphic), based on the standard eye-to-screen distance for a given device.
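Here is a rough Python sketch of that calculation; the 0.3 degree visual-angle target and the viewing distances are illustrative assumptions, not standards, and the result is the rendered character height in points rather than the nominal font size (which is usually somewhat larger, depending on the typeface).

```python
import math

def character_height_points(viewing_distance_cm, visual_angle_deg=0.3):
    """Approximate character height, in points, that subtends a given visual angle.

    The 0.3 degree default is an illustrative legibility target, not a standard.
    1 point = 1/72 inch ≈ 0.0353 cm.
    """
    height_cm = 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2)
    return height_cm / 0.0353

for device, distance in (("phone", 30), ("laptop", 50), ("desktop", 70)):
    print(f"{device:>8} at {distance} cm: ~{character_height_points(distance):.0f} pt character height")
```

At a typical laptop distance this works out close to the roughly 10 point figure mentioned below, once you allow for the nominal size being larger than the rendered character height.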

For optimal reading times, it turns out that font size should be roughly 10 point, and as a rule of thumb, at any scale below that you should always defer to a sans serif font.

Font Size Blue.jpg

Optimal Reading Scale: 8-12 point, for standard screen distance

9. Don’t Degrade Your Files

Work out the scale of your finished project first. There’s no point in designing good graphics if they aren’t visible at the scale they’re going to be shown at. Take, for example, the poster at the top of the page, which was intended for a symposium: it can’t convey any critical information on a laptop screen because it is simply too hard to see.


File Compression

Don’t ever compress your images into JPEGs until they are absolutely finalised and you have a copy of the original file. JPEGs are lossy files, meaning they become increasingly degraded every time they are re-saved; they also flatten your working layers into one visible image, which matters particularly for vector designs, because they will have been converted into raster bitmap graphics (‘resolution-dependent’, with limited scalability).

10. Use the right software

Finally, I recommend using industry-standard software like Adobe where possible, for a number of reasons. Firstly, if you’re working across platforms, you can easily transfer a document from one application to another because all the formats are compatible, from concept design all the way to publication. You never need to worry about professional indemnity or intellectual property complications if you have your own copy of licensed software.


My personal feeling is that some open source software may have glitches and bugs that are time consuming to fix if you don’t do programming. But also if you want to use your designs and skills professionally at work, you may as well set yourself up correctly with the programs you are assumed to be well versed in.

You won’t run the risk of having software incompatibilities with another team or group you may need to collaborate with if you’re all using the correct programs. Surprisingly, I see these problems as a software instructor often (usually when half of the team is trying to edit layouts on Word or Microsoft Office — *please* don’t do that!).

A final piece of advice: if you can do all of this, chances are you can pitch a savings spreadsheet to your organisation outlining how you might take on some ad hoc design work in your role – work that is probably currently being outsourced to a design firm at three times the cost, and that you could do for a modest pay rise. But be prepared to learn a whole lot more!

And there you have it — Infographics 101.


If you want to learn any creative software skills during the Coronavirus lockdown, feel free to contact me for a tutorial over Zoom or Skype. I teach for working professionals, but also for everyday creative enthusiasts.

Thanks for reading

and Happy Infographic Designing!