Why Your English Sounds Like a Slack Message
How Pronunciation Quietly Decides Promotions for Non-Native Developers in US Tech
This article is about pronunciation specifically — the second level of the language ceiling, and the place most non-native developers in US tech actually stall.
You're Reading Between the Lines — Already
A message comes in from your tech lead: "ok thanks."
You read it twice. Is she annoyed? Is she just busy? You scroll up. The previous message was your design proposal. No emoji. No follow-up. Just "ok thanks."
You open a DM to a coworker: "does she sound mad to you?"
This happens all day. Slack messages that could go either way. A "sure" that might be reluctant or genuine. A smiley face that someone uses warmly but you've learned to read as passive-aggressive in this company. A reply with no exclamation point where you'd expect one. You re-read your own messages before sending them. You add "(joking!)" in parentheses to be safe. You replay short replies in your head, trying to hear the tone.
You already know what's going on. Text is a lossy channel. The literal meaning gets through. Tone, irony, warmth, urgency, confidence — most of that gets stripped out somewhere between the keyboard and the screen. So everyone compensates. Emoji. Punctuation. Explicit framing like "no rush" or "just my two cents" (idiom: just my opinion, take it or leave it). It's a workaround for a missing dimension.
Here's the part most non-native speakers don't realize.
When you speak English in a meeting — even fluently, even with strong vocabulary — the same compression is happening. Not because you're using the wrong words. Because the part of speech that carries tone, irony, warmth, urgency, and confidence — pronunciation, prosody (technical term: the rhythm, stress, and intonation of speech), the way you pace your sentences — isn't fully tuned yet. The literal meaning gets through. The rest of you doesn't.
Your colleagues aren't consciously noticing. But they're filling in the gaps the same way you fill in gaps in a Slack message. And the conclusions they reach aren't always the ones you'd want.
This article is about the missing channel: what's actually on it, why it matters more than most non-native speakers realize, why studying harder won't fix it, and what will.
Charming in a Café, Costly in a Standup
Have you ever heard someone speak your native language as a foreigner? It's charming. You appreciate the effort. When they mix up a word or land an idiom slightly wrong, it's funny — sometimes endearing, sometimes the highlight of the conversation. In casual settings, the gap between what they meant and what they said is part of the experience.
Work isn't a casual setting.
In a meeting, speed matters. Speaking slowly reads as uncertainty. Hesitating reads as not knowing the answer. A pause your colleague would interpret as "thinking carefully" gets read as "doesn't have a position" when it comes from you. Imprecision implies you don't fully understand the system. A delivery that lands flat implies you don't fully believe what you're saying.
Conflict is where the gap shows up the most. Someone challenges your design. A senior engineer pushes back (phrasal verb: challenges or resists) on your estimate. A PM tries to reopen a decision that was already made. You need to respond in seconds. You need to sound measured, not defensive. You need to hold your ground (idiom: maintain your position under pressure) without sounding rigid. You need to disagree without sounding hostile. All of that gets carried by tone, pacing, and word choice — the exact things that get compressed when you're operating in a second language.
And it's not just conflict. It's small talk before the meeting starts. It's a quick joke that lightens a tense moment. It's saying "good point" in a way that sounds like you mean it, not like you're conceding. It's joining a conversation that's already moving, not waiting for a pause that never comes. It's all the moments where being technically correct isn't enough — where you also have to sound right.
When a non-native speaker talks in English without trained pronunciation and prosody, the same compression happens as in a Slack message, only louder and at higher stakes. The literal meaning gets through. Everything else — confidence, authority, warmth, group membership, social status, the sense that you belong in this room — gets stripped out or distorted on the way to the listener. The words land. The signal that says this is someone to take seriously often doesn't.
Native speakers aren't running this calculation consciously. They're doing what you do when you read "ok thanks" — filling in the missing channel from cues they don't even realize they're picking up on. And when those cues are degraded, the picture they assemble of you is degraded too. Not because they're consciously prejudiced. Because the channel is lossy and their brain is doing what brains do.
This is the language ceiling at Level 2, made specific. It's not that your English is bad. It's that the part of your English that broadcasts who you are — the trained motor and prosodic layer — isn't loud enough yet to overwrite the impression the missing signal creates.
The Hidden Layer Where Promotions Are Decided
Work isn't just where you do the job. It's where you operate inside a web of relationships, hierarchies, and unstated rules. Roles are the formal layer — the org chart, the titles, the reporting lines. That's the tip of the iceberg (idiom: the small visible part of something much larger).
Underneath is everything else. Who has influence beyond their title. Who gets pulled into the room when a hard decision is being made. Whose opinion the VP asks for first. Who gets the benefit of the doubt (idiom: trust extended in unclear situations) when something goes wrong, and who has to explain themselves. None of this is written down. All of it is real.
The formal layer
- Org chart
- Titles
- Reporting lines
The political layer
- Influence beyond title
- Who gets pulled into the room
- Whose opinion the VP asks first
- Who gets the benefit of the doubt
This is the layer where promotions are actually decided. And on this layer, what you say matters less than how you say it. A correct answer delivered without conviction reads as a guess. A confident take on a half-formed idea reads as leadership. Skill alone doesn't move you up (phrasal verb: get you promoted). Skill plus the ability to project authority does. People who haven't experienced this from the outside often don't see it happening at all.
This is what the Wharton and UC Irvine study was actually measuring.
Evaluators heard the same job interview script read by native and non-native accented speakers. Same words, same qualifications. The non-native accented candidates were 16% less likely to be recommended for management. In a follow-up study, non-native accented entrepreneurs were 23% less likely to receive funding.
The detail that matters: evaluators didn't rate non-native speakers as harder to understand. Comprehension was fine. They rated them lower on "political skill" — the perceived ability to influence, persuade, and navigate people.
Read that again with the Slack analogy in mind.
Evaluators were assembling a picture of who these candidates were as operators in the political layer. They were doing it from the same cues people always use: tone, rhythm, pacing, the micro-confidence carried by stress and intonation, the small acoustic signals of belonging. When those cues are compressed or distorted, the picture comes out wrong. Not "this person is hard to understand" — they explicitly weren't. Something more like "this person doesn't quite sound like a leader."
That gap — between what was actually said and what evaluators concluded about the speaker — is the missing bandwidth in action. It's what gets stripped from spoken English when the motor and prosodic layer isn't fully trained. And it's what determines whether you get pulled into the rooms where decisions are made.
The most uncomfortable part: most of the people making these judgments don't know they're making them. Your manager isn't sitting in a calibration meeting thinking "this person has poor prosody, therefore not promotable." They're saying things like "I'm not sure they're ready for the next level" or "I don't see them as a leader yet." The mechanism is invisible to them, which is why no one names it for you. You can be the best engineer on the team and still be quietly losing the political read every week.
Why Studying Harder Won't Move the Needle (idiom: produce measurable improvement)
Think about how a basketball player gets better.
Two things happen in parallel, and they barely touch. One is studying the game — watching film, reading the playbook, learning which sets to call against a 2-3 zone, memorizing the scouting report on the other team's center. You can do all of this on a couch. You get better at it the way you get better at any subject: read more, think more, take notes.
The other is shooting ten thousand jumpers (basketball: jump shots). Footwork. Release point. The exact angle of the elbow. The way the wrist snaps. None of this gets better by reading. You have to do it, miss, adjust, do it again. A coach watches you and says "your guide hand is pushing the ball." You try to fix it. You miss in a new way. You adjust again. After a few thousand reps the motion starts to feel automatic.
Now consider what happens if a player only does the first part. They become a brilliant analyst of the game who can't make a contested layup. They know exactly what shot to take and can't take it.
Language has the same split. The two halves even live in different parts of the brain.
Declarative learning (technical term) — knowledge you can consciously study, recall, and explain. Facts, rules, vocabulary. Stored mainly in the hippocampus and cortex. Improves through reading and review. Studying the game.
Procedural learning (technical term) — motor patterns your body executes without conscious thought. Built through repetition with feedback. Stored in the motor cortex, basal ganglia, and cerebellum. Does not improve through reading. Shooting jumpers.
Declarative: studying the game
What it handles
- Vocabulary
- Grammar rules
- Reading comprehension
- Memorizing idioms
- Translating in your head
How it works
- Hippocampus and cortex
- Conscious, fast, study-friendly
- Improves by reading and review
Procedural: shooting jumpers
What it handles
- Pronunciation
- Prosody and intonation
- Speech rhythm and pacing
- Producing idioms in real time
- Hearing native speech in real time
How it works
- Motor cortex, basal ganglia, cerebellum
- Unconscious, slow, repetition-only
- Improves only by repetition with feedback
These are not just different skills. They live in different parts of the brain. They follow different rules. Improving one does not improve the other.
This is why most language education stalls people exactly where you're stalled.
Apps, textbooks, courses, flashcard decks — almost everything in mainstream language learning lives in the declarative column. It's gradeable. It scales. You can test it with a multiple-choice question. You can show clean progress on a dashboard. So that's what gets built. You can finish a 500-day streak on a vocabulary app, pass a C1 reading exam, and still pronounce English in a way that makes a VP unconsciously rate you lower on "political skill."
Studying harder maxes out the declarative column. It does almost nothing for the procedural one. You can read every grammar book ever written and your tongue still won't know where to go for an English /r/ that doesn't exist in your native language.
The procedural column is the missing bandwidth from the Slack analogy. It's also the part that decides whether evaluators read you as a leader. And it's the part that almost no language program is actually training.
If studying doesn't move pronunciation, what does?
Three places the procedural side can break. You may not know what your mouth should be doing for sounds your native language doesn't have. You may not be able to hear the gap between a native speaker's version and yours. Or your individual sounds may be fine but your rhythm and pacing give you away. There's a 30-second self-test at the end of this article that tells you which one is yours. For now, here's the mechanism that lets you fix any of them.
What Actually Trains Pronunciation — Rewiring Your Brain
Motor learning. Repetition with feedback. That's the whole answer.
It's also not a metaphor anymore. Brain imaging now lets researchers watch the procedural side rewire itself in real time when someone practices pronunciation. The mechanism that was invisible thirty years ago is visible on a scan today.
Neural rewiring (technical term) — physical changes in the brain's wiring that result from learning. New connections form between neurons; existing connections strengthen or weaken; the white matter that insulates fast-signaling pathways thickens. Visible on fMRI as changes in activation patterns, and on diffusion imaging as changes in structural connectivity. Also called neuroplasticity. Not a metaphor. Actual rewiring of the hardware.
A 2023 fMRI study scanned native English speakers learning Arabic phonetic contrasts over three days of training — about three hours of practice in total. After those three hours, researchers could see measurable changes in the inferior frontal gyrus and the cerebellum: exactly the procedural-system regions you'd expect, activating more strongly and showing structural changes in the underlying white matter. Three hours of focused practice, and the reorganization was visible on the scan.
The reason this works is that motor learning runs on a feedback loop that's been mapped to specific brain regions. Frank Guenther's lab at Boston University spent two decades building a computational model of speech motor control called DIVA and validating it with fMRI. The model says something simple: when you produce a sound, your brain compares the sound you intended to make against the sound you actually made, generates an error signal from the gap, and adjusts the next attempt. Over thousands of repetitions, the motor commands get tuned until intended and actual line up. The loop is what does the rewiring.
You can watch it close on a scan. In one experiment, researchers played speakers' own voices back to them through headphones, but secretly altered the pitch. The speakers' brains noticed within milliseconds. Activity spiked in the auditory error region of the brain, and within a fraction of a second, motor regions adjusted the next utterance to compensate. The speakers weren't consciously aware of doing this. The loop runs underneath awareness. It's how you learned to talk in the first place, and it's the only mechanism that can rebuild your pronunciation in a second language.
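If it helps to see the loop in a language you already speak, here's a toy sketch in Python. Nothing in it comes from the DIVA model itself; the target, the learning rate, and the single-number "motor command" are all illustrative. The shape is the point: produce, compare, adjust, repeat.

```python
# Toy sketch of the speech motor feedback loop. Illustrative only:
# the real system tunes thousands of articulatory parameters, not one float.

def run_feedback_loop(target: float, motor_command: float,
                      learning_rate: float = 0.3, reps: int = 20) -> float:
    """Tune a motor command toward an acoustic target by repetition."""
    for _ in range(reps):
        produced = motor_command                 # attempt the sound
        error = target - produced                # auditory error signal: intended vs. actual
        motor_command += learning_rate * error   # adjust the next attempt
    return motor_command

# With feedback, the command converges on the target over the reps.
print(run_feedback_loop(target=1.0, motor_command=0.0))

# Without feedback (learning_rate=0), repetition alone changes nothing.
print(run_feedback_loop(target=1.0, motor_command=0.0, learning_rate=0.0))
```

The second call is the degenerate case: with no usable error signal, no number of reps moves the command at all.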
Three things follow from this.
First, repetition without feedback doesn't work. If your brain can't compare intended to actual, there's no error signal, and no error signal means no adjustment. Just talking more in English doesn't fix pronunciation — plenty of people have lived in an English-speaking country for thirty years and kept the same accent they arrived with. The reps weren't the bottleneck. The feedback was.
Second, feedback has to be precise enough for the loop to use. "You sound a little off" doesn't generate an error signal you can act on. "Your tongue is too far back for that vowel" does. This is why a tutor with phonetic training, recording yourself and comparing against a native speaker, or software that visualizes your pitch and formants all work, while well-meaning native speakers saying "almost!" mostly don't.
Third, this can move fast. Three hours produced visible rewiring in the Arabic study. People who train pronunciation deliberately — even for fifteen minutes a day — usually start hearing the difference in themselves within weeks. The procedural system is slow compared to the declarative one, but it's not glacial. It just needs the right inputs.
So what does training the right column actually look like in practice?
Three Techniques That Actually Train the Procedural Side
Most pronunciation advice tells you to practice more. That's like telling the basketball player to shoot more jumpers. Technically correct. Useless without specifying what kind of practice closes the feedback loop.
Three techniques do. They're not new. They've been used by phoneticians, accent coaches, and serious language learners for decades. What's new is that brain imaging now explains why each one works — and why the alternatives most apps offer don't.
Each technique trains a different part of the loop: the conceptual map, the input side, and the output side.
Training the Individual Sounds
The first thing your motor system needs is a precise idea of what it's trying to produce. Vague is not actionable. "An English /r/" doesn't tell your tongue where to go. "An alveolar approximant — tongue tip near but not touching the ridge behind your teeth, no contact, no friction" does. The first instruction is a label. The second is something your tongue can act on.
This is what articulatory phonetics gives you — a description of speech sounds in terms of where in the mouth they're made, what the tongue and lips are doing, whether the vocal folds vibrate, whether air flows through the nose. For sounds your native language doesn't have, articulatory descriptions tell you exactly what to set up before you try to produce the sound. They give your motor system its target.
The most common way to get this is by learning a small amount of the International Phonetic Alphabet (IPA) — a notation system where every symbol represents exactly one sound. The value isn't really the symbols. It's that learning IPA forces you to learn articulatory phonetics, and once you have, you can read pronunciation dictionaries precisely instead of relying on respellings like "kuh-WAH-sahn," which encode your native accent into the answer.
You don't have to write IPA fluently. You need to read it well enough to know what your mouth should be doing for the dozen or so English sounds your native language doesn't have. A few hours with the IPA chart for English vowels and consonants is enough to start.
What it looks like in practice: when you encounter a word you're not sure how to pronounce, look up its IPA transcription instead of trying to imitate audio cold. The transcription tells your motor system the target. The audio tells you whether you hit it.
Tuning the Input Side
Minimal pairs (technical term) — two words that differ by exactly one phoneme. Ship and sheep. Rice and lice. Bit and bet. The kind of contrast your native language may not make, which means your ear may not register it.
Adult learners often literally can't hear the difference between two phonemes their first language treats as the same sound. Your brain spent decades categorizing sounds according to your native phoneme inventory, and it filters out distinctions that didn't matter. Japanese speakers don't reliably hear English /r/ vs. /l/. Spanish speakers merge English /i/ and /ɪ/ (the vowels in sheep and ship). Mandarin speakers flatten consonant clusters their native phonology doesn't allow.
If you can't hear the contrast, you can't produce it reliably. The error signal in the feedback loop depends on your auditory system noticing a gap between intended and actual. If your ear has been trained to ignore that gap, the loop never closes — and you can repeat a word ten thousand times without your pronunciation moving.
Minimal pair drills retrain the perceptual filter. You hear two words, identify which one was spoken, get immediate feedback. After enough trials, the categories sharpen and you start hearing distinctions you couldn't hear before. Classic studies on Japanese speakers learning English /r/ and /l/ showed that perceptual training transferred to production gains — once trainees could hear the contrast reliably, their own production of those sounds improved too, without separate production training. Tuning the input side helps tune the output side.
What it looks like in practice: a few minutes a day of minimal pair listening drills targeting the specific contrasts your native language doesn't make. Because those contrasts are predictable from your L1 — Japanese speakers, Spanish speakers, and Mandarin speakers each need different drills — this works best inside a course built around your specific blind spots, rather than generic listen-and-repeat exercises.
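Because the drill itself is so simple, it's also easy to script. Here's a minimal sketch, assuming you have (or can record) an audio clip for each word; the pairs target /i/ vs. /ɪ/, and the function names are made up for illustration.

```python
import random

# Minimal-pair drill scaffold. The pairs below target the /i/ vs /ɪ/
# contrast (sheep/ship); swap in the contrasts your own L1 merges.
# Playing audio and collecting the listener's guess are left as stubs.

PAIRS = [("sheep", "ship"), ("beat", "bit"), ("seat", "sit"), ("leave", "live")]

def next_trial(rng: random.Random):
    """Pick a pair, then the word that gets played this trial."""
    pair = rng.choice(PAIRS)
    spoken = rng.choice(pair)
    return spoken, pair

def score(spoken: str, guess: str) -> bool:
    """Immediate feedback: did the ear catch which word was spoken?"""
    return guess == spoken

rng = random.Random()
spoken, options = next_trial(rng)
# A real drill would play the clip for `spoken` here, show both options,
# read the listener's guess, and report score(spoken, guess) right away.
print(options)
```

The immediate feedback after each trial is the whole mechanism: it's what lets the perceptual categories sharpen instead of the guesses staying random.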
Training the Output Side at Native Speed
Shadowing (technical term) — repeating what a native speaker says in real time, with as little delay as possible — ideally less than a second behind. You're not pausing, translating, or analyzing. You're trying to mirror the stream of speech as it happens, including rhythm, intonation, and the way words blur into each other.
Shadowing works because it forces your motor system to keep up with native pacing and natural prosody, which you can't get from reading aloud or from slow careful repetition. It also trains your ear: to shadow well you have to actually parse what's coming in, not just recognize isolated words.
The reason shadowing closes the feedback loop is that it stacks intended and actual on top of each other in real time. You hear the model. You produce your version a beat behind. Your brain has both signals available simultaneously and can compute the gap immediately. That's the error signal. Most language practice doesn't generate one — you say a sentence, and there's nothing to compare it against. Shadowing makes the comparison automatic.
It's also why shadowing trains things minimal pairs and articulatory phonetics can't: rhythm, sentence stress, the natural reductions that happen in connected speech ("didja eat yet?" instead of "did you eat yet?"). These are prosodic features, not segmental ones. They're the missing bandwidth from the Slack analogy made audible — the confidence, the timing, the sense of belonging in the conversation. You can have perfect individual phonemes and still sound foreign because your rhythm is wrong. Shadowing is the only one of the three techniques that fixes that directly.
What it looks like in practice: pick a recording of a native speaker — a podcast clip, a video, a TED talk. Start with the transcript visible. Play the audio and shadow it, staying as close behind the speaker as you can. Do the same passage several times. Eventually drop the transcript. It also helps to do this while walking — the movement keeps you from over-analyzing and engages your body. Five to fifteen minutes a day moves the needle within weeks.
Why These Three, In This Order
The three techniques aren't interchangeable. They train different parts of the same loop, and they stack.
Training the individual sounds gives you the conceptual map: a precise idea of what your mouth should be doing. Minimal pairs tune your perception: the input side of the loop, so your brain can hear the gap between intended and actual. Shadowing trains the output side at native speed: rhythm, prosody, and the motor execution that closes the loop in real conversations.
Skip the first and you're guessing at intended. Skip minimal pairs and your ear can't compute the gap. Skip shadowing and you can produce isolated sounds correctly while your sentences still sound foreign because the rhythm is wrong.
Together they're the closest thing to a complete pronunciation training program — and they're almost entirely missing from the language apps and courses most learners actually use.
Which Part of the Loop Is Broken for You?
Before you train all three, it helps to know which part of the loop is the weakest for you right now. The diagnostic doesn't take a coach. It takes thirty seconds and your own voice.
Open the recorder on the GrokEnglish home page. Hear a native speaker say a real tech phrase. Record yourself saying the same thing. Play them back to back. Then check which of these matches what you noticed:
Mouth
"I wasn't sure what my mouth should have been doing."
Ear
"The gap between the native version and mine was vague — I couldn't locate it."
Rhythm
"My rhythm and pacing felt off, even when individual sounds were close."
You'll usually find at least two of these are true. That's normal. Start with the one where the gap was clearest — that's the part of the loop that's most broken right now.
A Word About Effort
This isn't easy. It also isn't quick.
Most language apps won't tell you this. Their business depends on selling the feeling of progress — streaks, badges, "you're 73% fluent in Spanish" dashboards. Real procedural change doesn't fit that model. It's slower, less photogenic, and harder to gamify. So most of the market quietly skips it and sells the declarative work instead, dressed up to look like everything you need.
You probably know this pattern from your own profession. Becoming a strong engineer isn't quick either. You read papers. You debug at 2 AM. You build systems that fail and rebuild them. You don't expect a 15-minute tutorial to make you a staff engineer. You expect a real skill to take real work, and you do the work because the goal is worth it — promotion, autonomy, the kind of role that pays you to think hard about hard problems.
Pronunciation is the same kind of skill. Fifteen minutes a day for a few months will move it. Five minutes a day for a few weeks will start to. But there is no version of this where you don't show up and run the loop.
That's the assumption GrokEnglish is built on. We're not promising you'll sound native by Friday. We're giving you a tool that makes the loop easy enough to run, designed for people who already know how to put in deliberate work and just need the right thing to put it into. The reps are yours. The mechanism is what we provide.
What to Do This Week
You don't have to overhaul your routine. You have to start the loop.
The single most important shift is moving even fifteen minutes a day from the declarative side to the procedural side. From flashcards to repetition with feedback. From reading about English to producing it and comparing it to a model. That's the move. Everything else is dosage.
Here's the simplest version of the loop, the one you can run today:
- Pick a phrase you'd actually use at work. Not a textbook sentence. Something you'd say in standup or to a stakeholder. "Let's circle back (idiom: return to the topic later) on this after the design review." "I'd push back on that estimate." "The root cause was a race condition in the caching layer."
- Hear a native speaker say it. Not your own attempt. The target.
- Record yourself saying the same phrase. Don't think about it too much. Just say it.
- Listen back, side by side. Where do you hear the gap? A vowel that's off? A stress on the wrong syllable? Pacing that's too even, too flat, or too slow? You don't need a coach to start noticing this. The gap shows up immediately.
- Try again. That's the loop. Native model → your version → comparison → adjustment → next attempt. The feedback loop from earlier, run by hand.
Five reps a day on a single phrase, for a week, is enough to start hearing yourself differently. That's not a metaphor — that's the rate the procedural system actually moves at when you give it real input.
The hardest part is doing it consistently. Most people skip it not because it's hard but because it feels small. It is small. The compounding is what isn't.
A faster version of the loop
If running the loop manually is friction enough that you won't do it, use a tool that does the friction for you. We built the recorder on the GrokEnglish home page for exactly this purpose: hear a native speaker say a real tech phrase, record yourself saying the same thing, play them back to back. Three taps, thirty seconds, and the gap is immediately visible.
It's the same loop. The tool just removes the steps where most people quit — finding a model, transcribing it, setting up a recording, playing both clips next to each other. That setup is small but it's exactly the kind of friction that turns I'll do it tomorrow into I never started.
If you want pre-loaded tech terms instead of picking your own phrase, the GrokEnglish dictionary has 100+ software-dev words with native pronunciations ready to play. One click and you're in the same loop — hear, record, compare — but with the vocabulary you actually use at work. Try a few.
If you want to go further, the three techniques layer on top of this loop, not instead of it:
- For sounds your native language doesn't have, spend an hour with the IPA chart so you know what your mouth should be doing before you record.
- For phoneme contrasts you can't reliably hear, run a few minutes of minimal pair drills before you practice phrases that contain those contrasts. Train the ear before the mouth.
- For rhythm and prosody, shadow a podcast clip or a TED talk for five to ten minutes during your commute or your walk. Stay close behind the speaker. Don't pause to analyze.
Each of these layers does something the basic record-and-compare loop alone won't. But the basic loop is the one you should start with this week, because doing it once is the difference between believing the procedural side can change and watching it start to.
Pick one thing and do it before Friday
You've read the framework. You know why studying harder isn't moving your pronunciation. You know what motor learning is and what feedback loop builds it. The hardest part now is the same as it was at the end of the Language Ceiling article: closing the tab and actually doing one thing.
So make it one thing.
Record yourself saying one technical phrase. Compare it to a native speaker. Notice one specific gap — a vowel, a stress pattern, a piece of rhythm. Try the phrase again. That's it. That's the whole loop. The procedural side starts moving the moment you run it once.
Three hours of focused practice produced visible rewiring on the scan in the Arabic study. You're not asking your brain to do anything it isn't built for. You're just asking it for the right kind of input.
Start this week.
Phrases From This Article
Idioms
- just my two cents — just my opinion, take it or leave it
- hold your ground — maintain your position under pressure
- tip of the iceberg — the small visible part of something much larger
- the benefit of the doubt — trust extended in unclear situations
- move the needle — produce measurable improvement
- circle back — return to the topic later
Phrasal Verbs
- push back — challenge or resist
- move (someone) up — get someone promoted
Technical Terms
- prosody — the rhythm, stress, and intonation of speech
- declarative learning — knowledge you can study, recall, and explain (vocabulary, grammar, facts)
- procedural learning — motor skills built through repetition with feedback (pronunciation, prosody)
- neural rewiring / neuroplasticity — physical changes in the brain that result from learning
- articulatory phonetics — describing speech sounds by what the mouth and vocal tract are doing
- IPA (International Phonetic Alphabet) — notation system where each symbol maps to one sound
- minimal pairs — two words that differ by exactly one phoneme (ship / sheep)
- shadowing — repeating a native speaker's audio in real time, less than a second behind
Sources
- Huang, L., Frideger, M., & Pearce, J. L. (2013). "The Price of Accent: Evaluator Accent, Persuasion, and Entrepreneurship." Journal of Applied Psychology, 98(6), 1005–1017. https://pubmed.ncbi.nlm.nih.gov/23937299/
- Spence, J. L. et al. (2024). "A meta-analysis of accent discrimination in hiring decisions." Society for Personality and Social Psychology. https://spsp.org/news/character-and-context-blog/spence-accent-discrimination-hiring
- Lev-Ari, S., & Keysar, B. (2010). "Why don't we believe non-native speakers? The influence of accent on credibility." Journal of Experimental Social Psychology, 46(6), 1093–1096. https://doi.org/10.1016/j.jesp.2010.05.025
- Gluszek, A., & Dovidio, J. F. (2010). "The Way They Speak: A Social Psychological Perspective on the Stigma of Nonnative Accents in Communication." Personality and Social Psychology Review, 14(2), 214–237. https://doi.org/10.1177/1088868309359288
- Hellbernd, N., & Sammler, D. (2016). "Prosody conveys speaker's intentions: Acoustic cues for speech act perception." Journal of Memory and Language, 88, 70–86. https://doi.org/10.1016/j.jml.2016.01.001
- Mehrabian, A. (1971). Silent Messages: Implicit Communication of Emotions and Attitudes. Belmont, CA: Wadsworth.
- Lapakko, D. (2007). "Communication is 93% Nonverbal: An Urban Legend Proliferates." Communication and Theater Association of Minnesota Journal, 34, 7–19. https://cornerstone.lib.mnsu.edu/ctamj/vol34/iss1/2/
- Kruger, J., Epley, N., Parker, J., & Ng, Z.-W. (2005). "Egocentrism over e-mail: Can we communicate as well as we think?" Journal of Personality and Social Psychology, 89(6), 925–936. https://doi.org/10.1037/0022-3514.89.6.925
- Squire, L. R. (2004). "Memory systems of the brain: A brief history and current perspective." Neurobiology of Learning and Memory, 82(3), 171–177. https://doi.org/10.1016/j.nlm.2004.06.005
- Squire, L. R., & Dede, A. J. O. (2015). "Conscious and Unconscious Memory Systems." Cold Spring Harbor Perspectives in Biology, 7(3), a021667. https://doi.org/10.1101/cshperspect.a021667
- Henke, K. (2010). "A model for memory systems based on processing modes rather than consciousness." Nature Reviews Neuroscience, 11(7), 523–532. https://doi.org/10.1038/nrn2850
- Tourville, J. A., & Guenther, F. H. (2011). "The DIVA model: A neural theory of speech acquisition and production." Language and Cognitive Processes, 26(7), 952–981. https://pmc.ncbi.nlm.nih.gov/articles/PMC3650855/
- Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). "Neural mechanisms underlying auditory feedback control of speech." NeuroImage, 39(3), 1429–1443. https://pmc.ncbi.nlm.nih.gov/articles/PMC3658624/
- Guenther, F. H. (2016). Neural Control of Speech. Cambridge, MA: MIT Press.
- Simmonds, A. J., Wise, R. J. S., & Leech, R. (2011). "Two Tongues, One Brain: Imaging Bilingual Speech Production." Frontiers in Psychology, 2, 166. https://doi.org/10.3389/fpsyg.2011.00166
- Alotaibi, S., Alsaleh, A., Wuerger, S., & Meyer, G. (2023). "Rapid neural changes during novel speech-sound learning: An fMRI and DTI study." Brain and Language, 245, 105324. https://doi.org/10.1016/j.bandl.2023.105324
- Reiterer, S. M., Hu, X., Erb, M., Rota, G., Nardo, D., Grodd, W., Winkler, S., & Ackermann, H. (2011). "Individual Differences in Audio-Vocal Speech Imitation Aptitude in Late Bilinguals: Functional Neuro-Imaging and Brain Morphology." Frontiers in Psychology, 2, 271. https://doi.org/10.3389/fpsyg.2011.00271
- Lee, J., Jang, J., & Plonsky, L. (2015). "The Effectiveness of Second Language Pronunciation Instruction: A Meta-Analysis." Applied Linguistics, 36(3), 345–366. https://doi.org/10.1093/applin/amu040
- Logan, J. S., Lively, S. E., & Pisoni, D. B. (1991). "Training Japanese listeners to identify English /r/ and /l/: A first report." Journal of the Acoustical Society of America, 89(2), 874–886. https://doi.org/10.1121/1.1894649
- Bradlow, A. R., Pisoni, D. B., Akahane-Yamada, R., & Tohkura, Y. (1997). "Training Japanese listeners to identify English /r/ and /l/: IV. Some effects of perceptual learning on speech production." Journal of the Acoustical Society of America, 101(4), 2299–2310. https://doi.org/10.1121/1.418276
- Hamada, Y. (2016). "Shadowing: Who benefits and how? Uncovering a booming EFL teaching technique for listening comprehension." Language Teaching Research, 20(1), 35–52. https://doi.org/10.1177/1362168815597504
- EEOC. "Enforcement Guidance on National Origin Discrimination." https://www.eeoc.gov/laws/guidance/eeoc-enforcement-guidance-national-origin-discrimination
- GrokEnglish. "The Language Ceiling: A Developer's Guide to Communication That Gets You Promoted." the-language-ceiling.html