
Can You Use ‘Literally’ Figuratively?

Do you literally mean “literally,” or do you mean “figuratively”? Even the most well-respected dictionaries have loosened up the restrictions on “literally.”

by Bennett Kleinman

During the dog days of summer, you may say, “It’s so hot, I’m literally melting.” But unless you’re a giant talking ice cream cone, that sentence is far from literal. The word “literally” means “in a literal sense,” which implies that you’re talking about something factually, precisely, and accurately. However, people often use the word in a figurative sense, which drives grammar pedants up a wall. You may be surprised (or reassured) to learn, though, that there are plenty of times when it’s OK to use “literally” figuratively — and reputable dictionaries agree.


The word “literally” was coined in the 1530s from the Latin literalis, meaning “of or belonging to letters or writing.” By the 17th century, that definition had already begun to shift as people increasingly used “literally” for hyperbole and in metaphors. In 1876, Mark Twain wrote in The Adventures of Tom Sawyer: “And when the middle of the afternoon came, from being a poor poverty-stricken boy in the morning, Tom was literally rolling in wealth.”

This figurative usage continued to grow more widespread, and eventually, dictionaries took notice. Both the Oxford English Dictionary and Merriam-Webster contain seemingly opposing definitions for the word “literally,” stating it can be used both literally and figuratively. “Literally” is a Janus word, meaning it can act as its own opposite; other examples include “cleave” (which means both “to split” and “to adhere”) and “oversight” (“supervision” and “omission”). When Merriam-Webster updated the definition of “literally” in 2013, editors explained they “included this definition for a very simple reason: a lot of people use it this way, and our entries are based on evidence of use.”

If you find yourself using “literally” in the figurative sense, don’t worry, because you’re not alone, nor are you incorrect. If a grammar stickler gives you a hard time, just point them to the dictionary and go on your merry way.

Featured image credit: Sohel Parvez Haque/ Shutterstock

Why Doesn’t English Have Accent Marks?

While accent marks provide clear pronunciation guides in many world languages, they’re conspicuously absent in English. Why don’t we use these marks?

by Bennett Kleinman

If we say, “Let’s talk about English accents,” you might imagine a refined gentleman taking his tea and crumpets. But in this case, we’re talking about accent marks. While many other languages rely heavily on accent marks to guide pronunciation, English speakers rarely deal with tildes and umlauts (and in the rare cases they do come across an accent mark, odds are the word is a loanword from another language).


In a technical sense, accent marks are called “diacritical marks” — symbols added to letters to indicate a change in sound, stress, or tone. They include acute accents (é), diaereses (ä), and cedillas (ç), just to name a few. Diacritical marks are quite common in other languages, as evidenced by many of the loanwords that are now a common part of the English lexicon: café, crème, doppelgänger, château, açai, piñata — the list goes on.

So why doesn’t English have diacritical marks of its own? The simplest answer likely lies in the invention of the printing press. During the 15th century, many early printers eliminated accent marks from words to make the printing process easier. While many English words have roots in languages that use diacritical marks, the accentless versions have become the standard English spellings. For example, the French hôtel, début, and façade became “hotel,” “debut,” and “facade” in English — same word, same definition, no accent marks.

Featured image credit: designium/ Shutterstock

Why Is ‘Mama’ the First Word for So Many Babies?

Many babies begin speaking with the simple words “mama” and “dada” — or variations thereof — and there are a few linguistic reasons for that.

by Bennett Kleinman

A baby’s first word is a major milestone in their development, no matter how short or long the word may be. (That being said, if your baby suddenly says “Worcestershire” or “açai,” you have a genius on your hands.) Most kids tend to say their first coherent words around the age of 1, and then begin to form complete sentences within the following year. Often, speech begins with the simple words “mama” and “dada” — or variations thereof — and there are a few reasons for that.


First off, the repetitive nature of the syllables makes them catchy and easy to repeat, and babies hear these terms far more often than, say, “hocus-pocus” or “flip-flop.” In linguistic terms, the word “mama” starts with a labial consonant (“m”), which uses both lips in much the same way a baby does when breastfeeding or bottle-feeding. This familiar mouth position makes it much easier for babies to say the word “mama,” as they already know how to make that shape with their lips.

The word “dada,” on the other hand, begins with a dental consonant — a sound made by pushing the tongue against the upper teeth — and because that motion is less familiar, it often comes after “mama.” However, unlike the nasal “m,” saying “dada” doesn’t require learning how to push air out through the nose, a skill that takes time for babies to develop. Some babies may say “mama” and “dada” simply because they associate the sounds with food or comfort, not necessarily because they’re a direct reference to mom and dad.

If a baby hasn’t started babbling with “mama” or “dada,” another short and/or repetitive term such as “uh oh” or the ubiquitous “no” might be coming soon. Studies show that learning and using repetitive versions of words (“night-night” and “choo-choo,” for example) at a young age may actually help children in vocabulary learning. This makes words like “mama” and “dada” excellent options if you’re trying to encourage your child to say their first word.

Featured image credit: Dakota Corbin/ Unsplash

What Is a “Schwa”?

The schwa is a vowel sound depicted in pronunciation guides as an upside-down “e.” If you can say “uhhhhh,” you can say a schwa.

by Bennett Kleinman

Before we talk about the word “schwa,” and what this symbol does, let’s just acknowledge how fun it is to say over and over. Schhhwaaaa … schhhwaaaaa … OK, that’s enough of that.


“Schwa” is the name for the upside-down “e” symbol (ə) often seen in dictionary pronunciation guides as part of the International Phonetic Alphabet (IPA). It’s derived from the Hebrew word shva — a vowel that serves a similar purpose and appears in writing as two vertical dots beneath a letter. When it comes to pronunciation, the “ə” designates an unstressed syllable pronounced more like “uh.”

What makes the schwa special is that it often shows up even where there’s no vowel letter on the page. Take “rhythm,” for example — while no vowel appears between “th” and “m,” the IPA pronunciation shows “əm” as the second syllable. The word is pronounced “RIH-thuhm,” with the schwa representing the unstressed “uh” sound in the second syllable.

In addition to Hebrew and English, you’ll find schwas in at least 10 other languages, including Albanian, where it’s written as a diaeresis (two dots) over an “e.” In Dutch, it’s a digraph (two letters) written as “ij.” In Malay, schwas are shown using a symbol that looks like an upside-down “v.” No matter how the schwa is written, it sounds the same (the unstressed “uh” sound) in every language that uses it.

Featured image credit: akinbostanci/ iStock

When Should You Actually Use an Exclamation Point?

It’s easy to go overboard with exclamation points, but overindulging causes this mark to lose its punch. When is it appropriate to use an exclamation point?

by Bennett Kleinman

Hey!!! The exclamation point is one of the most common, yet frankly overused, symbols in modern writing. This familiar punctuation mark primarily denotes emphasis, or can serve as a warning when written on its own! An early version of the symbol originated during the Middle Ages (!), and over time, this simple punctuation mark has blossomed into the popular symbol used today!!!!!


As the previous paragraph demonstrates, however, it’s easy to go overboard with exclamation points. They should be used far more sparingly than they are — even professional writers are guilty of sending the occasional overly excited text!!! Sure, there are times when it’s appropriate to use one, but overindulging will cause the exclamation point to lose its punch. If you feel like you’ve been overusing exclamation points when texting or posting on social media, perhaps it’s time for a detox. Here’s how to cut back and use them in a more reasonable manner.

To understand when to use an exclamation point, let’s first review when not to use one. As a basic rule of thumb, if you’re writing a work email, a job application, a condolence card, or any similarly serious correspondence, you should eschew exclamation points altogether; they detract from the serious, professional tone such settings require. Let’s look at two examples: “I’m sorry for your loss.” vs. “I’m sorry for your loss!” The period conveys an aptly somber feeling, while the exclamation point diminishes the situation and makes it feel almost goofy or celebratory.

In general, it’s best to use exclamation points extremely sparingly. That being said, they are useful for conveying legitimate feelings of shock and awe. If you’re writing out a “Wow!” or “No!” then it’s perfectly OK to use an exclamation point for emphasis. And if you’ve managed to corral your exclamation point usage, that single punctuation mark will deliver the bang you need it to. 

Featured image credit: Bankrx/ Shutterstock

What’s the Difference Between Bugs and Insects?

When is it appropriate to call something a bug vs. an insect? To understand this debate, let’s travel back to a middle school science class lesson.

by Bennett Kleinman

For most people, the terms “bugs” and “insects” are used interchangeably to refer to any itsy-bitsy critter flying through the air or crawling on the countertop. But scientifically and etymologically, there are a few small yet notable distinctions between the two words. So when is it appropriate to call something a bug vs. an insect? To understand this debate, let’s travel back to a middle school science class lesson.


Scientists classify living organisms using a seven-tiered taxonomic hierarchy: kingdom, phylum, class, order, family, genus, and species. (A common mnemonic to help remember this order is: King Phillip Came Over For Good Soup.) These distinct groups range from the most general at the top (kingdom) down to the most specific (species). Within the kingdom Animalia, you’ll find a vast array of living creatures, including humans, birds, baboons, and beetles, just to name a few. But once you drop down a level to phylum, you begin to see significant differences, both biological and etymological.

The phylum Arthropoda includes pretty much all the critters we’d refer to as “bugs” — beetles, spiders, moths, centipedes, ticks, ladybugs, you name it. However, it also includes crabs, lobsters, shrimp, and any other creatures with an exoskeleton, a segmented body, and jointed appendages. The term “bug” itself originated in Middle English as the word “bugge,” referring to a frightening scarecrow. Over time, its use expanded to include anything that induced fright, such as creepy-crawlies.

The term “insect,” meanwhile, is derived from the class Insecta, a more specific grouping further down the taxonomic scale. By definition, all insects have an exoskeleton, a head, a thorax, an abdomen, three pairs of jointed legs, and a pair of antennae. This means the term can be used to describe ants, beetles, butterflies, cockroaches, grasshoppers, fleas, termites, and many more creatures. At the same time, it would be inaccurate to refer to centipedes or spiders as “insects,” since they don’t have all of those features. (Centipedes are in the class Chilopoda, and spiders are in the class Arachnida.) When it comes to nomenclature, using the term “bug” for a crawly critter is almost always correct, as it’s a far more general term than “insect.”

Featured image credit: Aneez Mohammed/ Unsplash+

Why Do We Call Them ‘Wisdom Teeth’?

Some fanciful terms for the human body have existed for centuries, and are now more commonly used than their scientific alternatives. Let’s look at the origins behind wisdom teeth.

by Bennett Kleinman

Imagine telling a non-native English speaker that after having your wisdom teeth removed, you slipped and hit your noodle on the door and then banged your funny bone on the way down. It sounds more like a children’s rhyme than a real-life accident, doesn’t it? But many of these fanciful terms for the human body have existed for centuries, and are now more commonly used than their scientific alternatives. Let’s look at the origins behind one of the most popular quirky anatomical nicknames: wisdom teeth.


Wisdom teeth are our third and final set of molars that generally appear between the ages of 17 and 25. They look just like the other molars, but they got a special nickname because of their timing. They usually break through the gums around the time that adolescents transition into adulthood. During these formative years, we grow more intelligent and we grow one more set of teeth, hence the nickname.

The etymological origins of “wisdom teeth” in English date back to the 1660s, when the molars were known as “teeth of wisdom.” But go back thousands of years earlier and you’ll find that the ancient Greek physician Hippocrates referred to these teeth as sophronisteres, a term that translates roughly to “prudent teeth.” In ancient Rome, the teeth were referred to in Latin as dentes sapientiae, meaning “wisdom teeth.”

Referring to them as “wisdom teeth” is far from an exclusively English phenomenon, as many other languages also drew inspiration from those early Greek and Latin terms. In Spanish, the molars are known as muelas del juicio (“teeth of judgment”), and in Arabic they’re called ders-al-a’qel (“teeth of the mind”).

Featured image credit: Orawan Pattarawimonchai/ Shutterstock

How Did September Get Its Name?

The name for the ninth month of the year comes from the Latin word for “seven.” How did this mismatch happen?

by Bennett Kleinman

As we flip the calendar from summer to fall, big changes start happening. One month you’re dressing up in a spooky costume, the next you’re carving up a turkey, and the next you’re singing carols in the snow. (And if you pay attention to retail stores’ decorations, you might never know what month you’re in.) But while September, October, November, and December are celebrated quite differently, they were all named in a similar manner by the ancient Romans.


The original Roman calendar had 10 months, beginning with March. The first four months of the year — March, April, May, and June — were named after gods and Latin verbs. The remaining months were named for their order in the year. Quintilis and Sextilis — the original names of July and August — translate to “fifth month” and “sixth month,” respectively. Similarly, the names for September, October, November, and December corresponded to the (now-inaccurate) positions of the months: septem (seven), octo (eight), novem (nine), and decem (ten).

When January and February were added to the start of the calendar — a change traditionally credited to the Roman king Numa Pompilius, long before Julius Caesar’s calendar reforms — every other month shifted back two spots. Even though September is now the ninth month of the year, its name still means “seventh month” in an etymological sense. This quirk also applies to October, November, and December.

Featured image credit: Smart Calendar/ Shutterstock

What Part of Speech Is the Word ‘Is’?

It’s one of the most commonly used words in the English language — so ubiquitous and so short that it slips by almost unnoticed. But “is” is important, and it does some heavy lifting.

by Bennett Kleinman

Most people have probably said the word “is” more times than their own name (it’s in this edition 30 times). It’s such a common term, in fact, that you might not have stopped to think about what “is” really … is. Thankfully, the answer is rather simple. In English, there are eight basic parts of speech: nouns, pronouns, verbs, adverbs, adjectives, prepositions, conjunctions, and interjections. “Is” falls squarely into the verb category, as it’s a conjugation of the verb “to be.”


“To be” is one of those irregular verbs that you just have to memorize. In the present tense, the conjugation is: “I am,” “you are,” “we are,” “they are,” and “he/she/it is.” “Is” is the third-person singular form.

“Is” commonly acts as a linking verb — a type of verb that doesn’t describe an action, but instead builds a bridge between the subject and the predicate. Take these examples: “Mona is my cousin,” “Your dress is beautiful,” and “She is 25 years old.” In these instances, “is” links the subject (“Mona,” “dress,” “she”) to another noun (“my cousin”), an adjective (“beautiful”), or a longer phrase (“25 years old”).

When “is” serves as an auxiliary verb, it’s a helper verb that lends support to the main verb of a sentence. For instance: “It is going to rain on Saturday” and “Mom is buying a cake for the party.” In both of these cases, “is” is not the main verb, but rather a verb that lends support to the main verbs “going” and “buying.” The purpose of using the auxiliary verb is to add meaning, clarify tense, or shape mood. 

Watch out for “is” tipping your sentences into the passive voice. We’ll cover this more in a future edition, but for now, if “is” is the main verb in your sentence, try to rework it around a more active verb. (Example: “The guitar is being played by Jaime” vs. “Jaime is playing the guitar.”) Sentences that lean on “is” and the passive voice can be unnecessarily confusing, and this simple act of self-editing can bring them into an unmistakably clear and active voice.

Featured image credit: tawanroong/ Shutterstock

What Does It Mean To ‘Cut the Cord’?

This idiom has modern technological connotations, but the original usage is much older. Let’s trace the roots of the independent meaning.

by Bennett Kleinman

Starting about 10 to 15 years ago, “cutting the cord” became a popular phrase for technological independence. We were saying goodbye to wired internet connections, eschewing cable services in favor of streaming platforms, and replacing home phones with mobile-only lines. But “cut the cord” has long been an idiom describing a greater sense of independence — it isn’t limited to technology, nor did it start there (even though “cord” might suggest as much).


The word “cord” first appeared in English during the 1300s, long before cable television existed. It came from the Old French corde, meaning “a string or small rope composed of several strands twisted or woven together.” During the late 14th century, “cord” took on a more figurative meaning, referring to anything that binds or restrains.

In a technological sense, many materials have been used for cords. In the 1720s, scientists discovered that electricity could travel along metal, but the first power distribution system wasn’t built until 1882, by Thomas Edison, who used copper rods wrapped in natural jute fiber and overlaid with a coal-like substance. The next advance came from Charles Goodyear, who patented vulcanized rubber in 1844; it was used to insulate electrical wires from the late 19th century through the 1940s. Most modern cords use PVC to insulate the metal wires. With the advent of battery technology, “cordless” became a popular term around 1905, specifically with regard to battery-powered items.

The first recorded use of the expression “cut the cord” came in 1950 in Roosevelt in Retrospect, a book by John Gunther. In that work, Gunther wrote, “Step by step, little by little, FDR became free [of his mother’s influence]. In a sense, it was the paralysis that cut the cord.” The usage evoked the umbilical cord — and, metaphorically, a mother’s attachment to her child later in life. A similar idiom, “cutting the apron strings,” also refers to an extended (sometimes unhealthy) attachment to a mother.

Idioms tend to have a life of their own, though, and while the original usage was related to mothers and children, the wording was too perfect not to be imbued with a stronger technological connotation in the 21st century. This was due to the very literal shift away from the cords of physical electronics. When somebody cancels their cable subscription and signs up for a streaming service, or finally unplugs that last telephone jack, that person has “cut the cord.”

Featured image credit: GoodLifeStudio/ iStock