William Shakespeare knew his way around a map—just look at how King Lear divides his kingdom into three parts, creating chaos while he pursues his “darker purpose.” But what did the world look like when the Bard still walked the earth? An exhibition at the Boston Public Library celebrates the 400th anniversary of Shakespeare’s death through historical maps. The play might be the thing for Shakespeare, but these maps, Linda Poon reports for CityLab, shed light on the playwright’s unique perspective and how he created drama for 16th-century theatergoers.
Shakespeare Here and Everywhere, which can be viewed at the Norman B. Leventhal Map Center at the Boston Public Library through February 26, 2017, uses maps to show how Shakespeare thought of far-off worlds. Though he was based in England, the Bard often used foreign settings to create exotic stories—and thanks to the development of maps and atlases during his era, he was able to elevate what amounted to armchair traveling into fine art.
International travel was treacherous and expensive during Shakespeare’s day, so it’s not surprising that neither he nor many of his contemporaries ever left England. But in a time before TV or the internet, maps were a source not just of coveted information, but of entertainment. As the British Museum notes, to own or look at a map meant the viewer was literally worldly, and atlases and wall maps were used not as ways of navigating places most people would never encounter, but as symbols of education and adventure.
Can’t make it to Boston? Do some armchair traveling of your own: You can view the maps in the exhibition on the library’s website. Or explore the locales mentioned in Shakespeare’s plays with Shakespeare on the Map, a project that uses Google Maps to show how the playwright used location.
Editor's note, December 6, 2016: The piece has been updated to reflect that the Norman B. Leventhal Map Center is an independent organization located at the Boston Public Library.
In ordinary visible light, this cluster of galaxies doesn’t look like much. There are bigger clusters with larger and more dramatic-looking galaxies in them. But there’s more to this image than galaxies, even in visible light. The gravity from the cluster magnifies and distorts light passing near it, and mapping that distortion reveals something about a substance ordinarily hidden from us: dark matter.
This collection of galaxies is famously called the “Bullet Cluster,” and the dark matter inside it was detected through a method called “weak gravitational lensing.” By tracking distortions in light as it passes through the cluster, astronomers can create a sort of topographical map of the mass in the cluster, where the “hills” are places of strong gravity and “valleys” are places of weak gravity. The reason dark matter—the mysterious substance that makes up most of the mass in the universe—is so hard to study is because it doesn’t emit or absorb light. But it does have gravity, and thus it shows up in a topographical map of this kind.
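A crude way to picture this "topographical map" idea: treat each clump of matter (dark or otherwise) as a smooth bump of surface density, and the map's "hills" sit wherever the summed density, and hence the gravity, is strongest. A minimal toy sketch in Python (the clump positions, masses, and widths here are invented for illustration, not taken from the Bullet Cluster data):

```python
import math

# Toy "mass map": two Gaussian clumps of matter on a small grid.
# Peaks of the summed density are the "hills" of strong gravity.
clumps = [
    {"x": 3.0, "y": 3.0, "mass": 1.0, "width": 1.0},
    {"x": 7.0, "y": 6.0, "mass": 2.0, "width": 1.5},
]

def density(x, y):
    """Summed Gaussian surface density from all clumps at (x, y)."""
    total = 0.0
    for c in clumps:
        r2 = (x - c["x"]) ** 2 + (y - c["y"]) ** 2
        total += c["mass"] * math.exp(-r2 / (2 * c["width"] ** 2))
    return total

# Sample a 10x10 grid and find the tallest "hill".
grid = [(x, y) for x in range(10) for y in range(10)]
peak = max(grid, key=lambda p: density(*p))
print("Strongest gravity near grid point:", peak)
```

Real weak-lensing maps are built the other way around: astronomers measure the tiny shape distortions of background galaxies and invert them to recover a density map like this one.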
The Bullet Cluster is one of the best places to see the effects of dark matter, but it’s only one object. Much of the real power of weak gravitational lensing involves looking at thousands or millions of galaxies covering large patches of the sky.
To do that, we need big telescopes capable of mapping the cosmos in detail. One of these is the Large Synoptic Survey Telescope (LSST), which is under construction in Chile, and should begin operations in 2022 and run until 2032. It’s an ambitious project that will ultimately create a topographical map of the universe.
“[LSST] is going to observe roughly half of the sky over a ten-year period,” says LSST deputy director Beth Willman. The observatory has “a broad range of science goals, from dark energy and weak [gravitational] lensing, to studying the solar system, to studying the Milky Way, to studying how the night sky changes with time.”

Artist’s rendering of the Large Synoptic Survey Telescope, currently under construction in Chile (Michael Mullen Design, LSST Corporation)
To study the structure of the universe, astronomers employ two basic strategies: going deep, and going wide. The Hubble Space Telescope, for example, is good at going deep: its design lets it look for some of the faintest galaxies in the cosmos. LSST, on the other hand, will go wide.
“The size of the telescope itself isn't remarkable,” says Willman. LSST will be 27 feet in diameter, which puts it in the middle range of existing telescopes. “The unique part of LSST's instrumentation is the field of view of [its] camera that's going to be put on it, which is roughly 40 times the size of the full moon.” By contrast, a normal telescope the same size as LSST would view a patch of the sky less than one-quarter of the moon’s size.
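That "40 times the size of the full moon" figure is easy to sanity-check with back-of-envelope numbers. Assuming LSST's commonly quoted field of view of about 9.6 square degrees and a full moon about half a degree across (round figures not stated in the article), the area ratio comes out in the same ballpark:

```python
import math

# Assumed round numbers (not from the article): LSST's camera covers
# roughly 9.6 square degrees; the full moon is about 0.5 degrees wide.
LSST_FOV_SQDEG = 9.6
MOON_DIAMETER_DEG = 0.5

# Area of the moon's disk on the sky, in square degrees.
moon_area = math.pi * (MOON_DIAMETER_DEG / 2) ** 2

ratio = LSST_FOV_SQDEG / moon_area
print(f"Full moon covers ~{moon_area:.2f} square degrees")
print(f"LSST's field of view is ~{ratio:.0f} times the moon's area")
```

With these round inputs the ratio lands near 50; quoted values of "roughly 40" depend on exactly which field-of-view and moon figures are used.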
In other words, LSST will combine the kind of big-picture image of the sky you’d get by using a normal digital camera, with the depth of vision provided by a big telescope. The combination will be breathtaking, and it’s all due to the telescope’s unique design.
LSST will employ three large mirrors, where most other large telescopes use two mirrors. (It’s impossible to make lenses as large as astronomers need, so most observatories use mirrors, which can technically be built to any size.) Those mirrors are designed to focus as much light as possible onto the camera, which will be a whopping 63 inches across, with 3.2 billion pixels.
Willman says, “Once it's put together and deployed onto the sky, it will be the largest camera being used for astronomical optical observations.”
While ordinary cameras are designed to recreate the colors and light levels that can be perceived by the human eye, LSST’s camera will “see” five colors. Some of those colors overlap those seen by the retinal cells in our eyes, but they also include light in the infrared and ultraviolet part of the spectrum.
After the Big Bang, the universe was a hot mess of particles. Soon, that quagmire cooled and expanded to the point where the particles could begin attracting each other, sticking together to form the first stars and galaxies and, ultimately, a huge cosmic web. The junctions of that web grew into large galaxy clusters, linked by long thin filaments and separated by mostly-empty voids. At least that’s our best guess, according to computer simulations that show how dark matter should clump together under the pull of gravity.
Weak gravitational lensing turns out to be a really good way to test these simulations. Albert Einstein showed mathematically that gravity affects the path of light, pulling it slightly out of its straight-line motion. In 1919, British astronomer Arthur Eddington and his colleagues successfully measured this effect, in what was the first major triumph for Einstein’s theory of general relativity.
The amount light bends depends on the strength of the gravitational field it encounters, which is governed by the source’s mass, size and shape. In cosmic terms, the sun is small and low in mass, so it nudges light by only a small amount. But galaxies have billions and billions of stars, and galaxy clusters like the Bullet Cluster consist of hundreds or thousands of galaxies, along with plenty of hot plasma and extra dark matter holding them all together, so the cumulative effect on light can be quite significant. (Fun fact: Einstein didn’t think lensing would actually be useful, since he only thought of it in terms of stars, not galaxies.)

A dark matter map, created by Japanese astronomers using weak lensing (Satoshi Miyazaki, et al.)
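The effect Eddington measured can be reproduced with the textbook formula for light grazing a massive body, alpha = 4GM / (c²b), where b is the closest approach distance. Plugging in standard values for the sun (the constants below are standard physical values, not figures from the article) recovers the famous 1.75-arcsecond deflection:

```python
import math

# Back-of-envelope check of Einstein's light-bending formula,
# alpha = 4GM / (c^2 * b), for a ray grazing the sun's limb.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius = impact parameter b, in meters

alpha_rad = 4 * G * M_sun / (c ** 2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600

print(f"Deflection at the sun's limb: {alpha_arcsec:.2f} arcseconds")
```

A galaxy cluster, with mass hundreds of trillions of times the sun's, deflects light far more strongly, which is why lensing becomes measurable across whole patches of sky.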
Strong gravitational lensing is produced by very massive objects that take up relatively little space; an object with the same mass but spread out over a larger volume will still deflect light, but not as dramatically. That’s weak gravitational lensing—usually just called “weak lensing”—in essence.
Every direction you look in the universe, you see lots of galaxies. The most distant galaxies may be too faint to see, but we still see some of their light filtering through as background light. When that light reaches a closer galaxy or galaxy cluster on its way to Earth, weak lensing will make that light a little brighter. This is a small effect (that’s why we say “weak”, after all), but astronomers can use it to map the mass in the universe.
The 100 billion or so galaxies in the observable universe provide a lot of opportunities for weak lensing, and that’s where observatories like LSST come in. Unlike most other observatories, LSST will survey large patches of the sky in a set pattern, rather than letting individual astronomers dictate where the telescope points. In this way it resembles the Sloan Digital Sky Survey (SDSS), the pioneering observatory that has been a boon to astronomers for nearly 20 years.
A major goal of projects like SDSS and LSST is a census of the galactic population. How many galaxies are out there, and how massive are they? Are they randomly scattered across the sky, or do they fall into patterns? Are the apparent voids real—that is, places with few or no galaxies at all?
The number and distribution of galaxies gives information about the biggest cosmic mysteries. For example, the same computer simulations that describe the cosmic web tell us we should be seeing more small galaxies than show up in our telescopes, and weak lensing can help us find them.
Additionally, mapping galaxies is one guide to dark energy, the name we give to whatever is driving the accelerating expansion of the universe. If dark energy has been constant over time, or if it has had different strengths in different places and times, the cosmic web should reflect that. In other words, the topographical map from weak lensing may help us answer one of the biggest questions of all: just what is dark energy?
Finally, weak lensing could help us with the lowest-mass particles we know: neutrinos. These fast-moving particles don’t stick around in galaxies as they form, but they carry away energy and mass as they go. If they take away too much, galaxies don’t grow as big, so weak lensing surveys could help us figure out how much mass neutrinos have.
Like SDSS, LSST will release its data to astronomers regardless of whether they’re members of the collaboration, enabling any interested scientist to use it in their research.
“Running the telescope in survey mode, and then getting those extensive high-level calibrated data products out to the entire scientific community are really gonna combine to make LSST be the most productive facility in the history of astronomy,” says Willman. “That's what I'm aiming for anyway.”
The power of astronomy is using interesting ideas—even ones we once thought wouldn’t be useful—in unexpected ways. Weak lensing gives us an indirect way to see invisible or very tiny things. For something called “weak,” weak lensing is a strong ally in our quest to understand the universe.
Unschooling—child-directed learning—is “the final and most extreme frontier in the broader cultural shift toward 'child-centred' parenting,” says the Globe and Mail. Unlike more traditional homeschooling, in which parents "try to replicate the formal curriculum of the school system in the home," says University Affairs, unschooling "encourages kids to do pretty much whatever they want with their time.”
The idea is that children are, by default, keen learners. If something strikes their passions, the thinking goes, kids will pursue it to the end, picking up intellectual skills and self-motivation as they go.
The question that's always posed to unschooling is whether kids who learn in this way are set up to succeed when confronted by the structured, organized, hierarchical society that awaits. According to new research, described by Luba Vangelova for KQED, it seems that—contrary to what skeptics might assume—unschooled kids do just fine when transitioning to more traditional colleges.
Almost half of the adults surveyed had either completed a bachelor’s degree or higher, or were currently enrolled in such a program; they attended (or had graduated from) a wide range of colleges, from Ivy League universities to state universities and smaller liberal-arts colleges.
According to KQED, though the path from unschooling to college isn't as streamlined as for kids who go to regular school, it isn't that difficult to tread, either. Aside from a few administrative hurdles, unschooled students didn't face immediate barriers in college:
Getting into college was typically a fairly smooth process for this group; they adjusted to the academics fairly easily, quickly picking up skills such as class note-taking or essay composition; and most felt at a distinct advantage due to their high self-motivation and capacity for self-direction.
Kids who are unschooled pretty much by definition won't get as broad of a baseline education as kids in the traditional school system, though. Unschooling lends itself to deep dives, to kids getting passionately and heavily invested in a sphere of innate interest. One of the main critiques of unschooling, says University Affairs, is that experiential learning doesn't lend itself to the broad range of intellectual pursuits available to the human race. And, says KQED, unschooled kids did report having trouble with math and, as a group, disproportionately favored careers in the "creative arts."
Many of the unschooled kids, however, did follow their passions into technical fields: “half of the men and about 20 percent of the women,” says KQED, went into fields that required a substantial background in science, technology or math.
The United States was in part shaped by the dreams and contributions of immigrants who sought a better life for themselves and their families. Thanks […]
At 2 p.m. on February 16, 1968, a special red telephone rang at the police station in Haleyville, Alabama. Rather than a police officer, U.S. Congressman Tom Bevill answered the call. On the other end of the line was Alabama Speaker of the House Rankin Fite, calling from the mayor’s office (actually located in another part of the same building). Bevill’s simple answer of “hello” may not rank alongside Samuel Morse’s “What hath God wrought,” but it ushered in an important part of daily life, one that has saved countless American lives over the past 50 years. The call marked the first use of the emergency number 9-1-1, a technological answer to a life-and-death question—how do you get help quickly in the event of an emergency? Americans wrestling with the problem have experimented with many innovative solutions over the years.
In the late 18th and early 19th centuries, getting to the scene of a fire as quickly as possible was the best defense against a damaging conflagration. Just as today, time was of the essence. Watchmen would alert the populace with wooden rattles and raise the alarm by shouting through the streets (sometimes known as “hallooing fire”). Citizens and volunteer firefighters alike would grab leather buckets, hooks, axes, and other necessary equipment and head in the direction of the clamor. A simple fire pumper might be drawn by hand to the scene as well. But finding a fire fast, especially in a warren of urban streets, could be difficult.
The citizens of Philadelphia tried one solution when they restored the steeple of the Pennsylvania State House (better known as Independence Hall) in 1828. They hung a new bell and put a watchman on duty to keep a lookout for fires. Franklin Peale, son of painter Charles Willson Peale, suggested an alarm system for the new bell that would direct fire companies to the scene of a blaze. In the event of a fire near the State House itself, the bell in the steeple was rung continuously. One peal at regular intervals indicated a fire to the north, two peals meant a fire to the south, three to the east, four to the west, and so on. This system is preserved in the decoration on the top of a fire hat from Philadelphia in the museum collections. A compass rose, with a bell at the center, displays the alarm code. Bell codes were used in other cities as well, like New York. In Boston, the city was divided into fire districts, and church bells would peal the number of a district where a fire was discovered. However, the 19th century saw American cities growing in size and population, and a better system was needed to pinpoint the location of an emergency.
William F. Channing and Moses Farmer were both obsessed with the potential for electromagnetism and telegraphy. Specifically, both believed it could be harnessed to create a reliable and near-instantaneous fire alarm system throughout the city of Boston. The two collaborated to lobby city officials to fund “the Application of the Electric Telegraph to signalizing Alarms of Fire” (as their presentation was titled) and received $10,000 to develop and establish their system.
After running nearly 50 miles of wire throughout the city, connected to dozens of alarm boxes and bells, Channing and Farmer’s system was ready in the spring of 1852. If someone opened an alarm box and turned a small crank, the special-purpose telegraph would send out a pulsating electric current to electromagnets that pulled and released the bell clappers, producing alarms both at the scene of the emergency and at the central station, where the location was recorded. The first attempt by the public to use the system was on April 29, 1852. Unfortunately, the helpful citizen cranked too fast, such that the message could not be read, and the man had to run to the central signal office to alert them of the fire in person. Nevertheless, Channing and Farmer would continue to refine their system, and within months it proved a reliable tool in raising the alarm in Boston.
Channing and Farmer made a joint application for a patent for their system, and a patent was issued on May 19, 1857 (Patent No. 17355). Their patent model resides today in the Electricity Collections here at the museum, along with earlier prototypes.
It was at a Smithsonian Institution lecture in March 1855 that emergency alarms took another step. At that lecture, William Channing described the details and merits of the Channing and Farmer system, humbly noting theirs was “a higher system of municipal organization than any which has heretofore been proposed or adopted.” Despite this lofty claim, both men had failed to sell their system to other cities and municipalities, and Channing was falling into debt.
Attending the lecture was John Nelson Gamewell, a postmaster and telegraph operator from Camden, South Carolina. Seeing an opportunity, Gamewell raised the funds to buy the rights to market the Channing and Farmer system. Beginning in 1856, he sold the system to several American cities, including New Orleans, St. Louis, and Philadelphia. By 1859 Gamewell obtained the full rights and patents to the system and was on the verge of creating a fire alarm empire when the Civil War broke out. The U.S. government seized the patents from the Confederate Gamewell, and John Kennard, a fire official from Boston, bought them on the cheap in 1867.
After the war, Gamewell moved north and partnered with Kennard to create a new company to manufacture and sell fire alarms. Building on their success, Gamewell established the Gamewell Fire Alarm Telegraph Company, and its logo—a fist holding a clutch of lightning bolts—would soon be found on alarm boxes throughout North America. By 1890 Gamewell systems were installed in nearly 500 cities in the United States and Canada.
While Gamewell boxes became a common sight on public streets and buildings in the early 20th century, more and more Americans were installing a new device in their homes and businesses: the telephone. Before the advent of rotary dial phones (ask your parents, kids), all calls went through with operator assistance, and emergency calls could be directed to the appropriate party. With dial service, a person with an emergency had to call direct to their local police station, hospital, or fire department. Experiments with a universal emergency number in the UK in the 1930s prompted the National Association of Fire Chiefs to recommend such a system for the United States in 1957. On January 12, 1968, after a decade of study and debate and presidential commissions, the Federal Communications Commission and AT&T announced the selection of 9-1-1 as a national emergency number. One FCC member boasted at the time that 911 would be better remembered than 007.
The number was indeed easy to remember, quick to dial when needed, particularly on rotary phones (did you ask?), and difficult to dial in error. AT&T had already established special three-digit numbers—4-1-1 for directory assistance and 6-1-1 for customer service—so the new emergency number fit the existing system.
Some 2,000 independent phone companies in the United States had been left out of the decision, many preferring “0” as the standard number. Nevertheless, one such company decided to get behind 9-1-1 in a big way. Bob Gallagher, the president of the Alabama Telephone Company (ATC), decided his company would beat “Ma Bell” to the punch. ATC staff picked Haleyville as the best location and worked after hours to design and implement the infrastructure. Almost exactly one month after AT&T’s announcement, Speaker Fite and Congressman Bevill spoke over the first dedicated 9-1-1 line. Nome, Alaska, would debut a 9-1-1 system about a week later.
It would take time for the system to grow in the United States, so publicity like that which surrounded the Haleyville call helped to spread the idea. Twenty years later, only half the U.S. population had access to a 9-1-1 system. By the end of the last century, that number had grown to well over 90%. Today an estimated 240 million calls a year are made to 9-1-1. Upwards of 80% of these calls now come from wireless devices, something almost impossible to imagine 50 years ago, just as the watchman with a wooden rattle could not have envisioned an alarm traveling over electrical wires.
Tim Winkle is the deputy chair of the Division of Home and Community Life and the curator of the Firefighting and Law Enforcement Collection.
One hundred years after the extinction of the passenger pigeon, the nation’s top bird science and conservation groups have come together to publish The State […]
After Marcia Wallace passed away last month, “The Simpsons” lost one of its characters, the 4th grade teacher Edna Krabappel, whose voice Wallace had provided for years. Mrs. Krabappel probably spent more time cynically cackling in the classroom than teaching math—but she wasn’t the only source of math lessons on the best cartoon television series ever to run. Several writers for The Simpsons, including Al Jean, J. Stewart Burns, Jeff Westbrook, and David X. Cohen, completed degrees in math and physics before they turned to screenwriting, Wired reports. And, ever faithful to their academic roots, those writers have found numerous ways to sneak in mini math lessons in various Simpsons episodes over the years, thanks to a variety of nerdy, clueless and informative characters.
A new book, The Simpsons and Their Mathematical Secrets, takes a deep dive into the math, physics and astronomy specifics of the show, but here are just a few examples, courtesy of Wired:
- “Treehouse of Horror VI: Homer³” (1995): Homer gets sucked into the third dimension, giving viewers a lesson on depth.
- “The Wizard of Evergreen Terrace” (1998): Homer’s notes include formulas for the then-elusive Higgs boson, the density of the universe and the geometry of donuts.
- “They Saved Lisa’s Brain” (1999): Physicist Stephen Hawking compliments Homer’s donut-shaped universe theory–a serious hypothesis among astronomers.
- “Bye Bye Nerdie” (2001): Professor Frink parrots a real-life proposal from 1897 to round Pi down to 3.
- “Bart the Genius” (1990): Bart has nightmares about the trains-traveling-at-different-speeds question.
- “Marge in Chains” (1993): A convenience store owner can recite π to its 40,000th digit.
- “Bart the Genius” (1990): Bart struggles to understand why the answer to the calculus problem y = r³/3 is worthy of interest.
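The punchline Bart misses is that the derivative of y = r³/3 is r², which the class reads aloud as “r dr r” (“har-de-har”). The calculus itself checks out, as a quick numerical estimate of the slope shows (the sample point r = 4 below is an arbitrary choice for illustration):

```python
def y(r):
    """The blackboard function from the episode: y = r^3 / 3."""
    return r ** 3 / 3

def derivative(f, r, h=1e-6):
    """Central-difference estimate of f'(r)."""
    return (f(r + h) - f(r - h)) / (2 * h)

# d/dr (r^3 / 3) = r^2, so at r = 4 the slope should be about 16.
slope = derivative(y, 4.0)
print(f"Slope at r = 4: {slope:.4f}")
```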
Edvard Munch’s “The Scream” is iconic—but it's also mysterious. Why is the stressed-out subject screaming, anyway? A Norwegian scientist has an intriguing new theory, reports the BBC’s Jonathan Amos: Perhaps the scream was inspired by an atmospheric phenomenon called mother-of-pearl clouds.
The rare clouds got their nickname from the abalone shells they resemble. Also known as nacreous or polar stratospheric clouds, they’re iridescent and pretty unusual. They form in northerly latitudes during the winter when the dry stratosphere cools down.
Normally, the stratosphere is so dry that it can’t sustain clouds, but when temperatures drop below about minus 108 degrees Fahrenheit, the scant moisture in the air gets cold enough to form ice crystals. When the sun hits the perfect place along the horizon, those ice crystals reflect its rays, causing a shimmering, pearly effect.
Helene Muri, a meteorologist and cloud expert, recently gave a talk at this year's European Geosciences Union General Assembly about how the wavy mother-of-pearl clouds could be portrayed in Munch’s painting. “As an artist, they no doubt could have made an impression on him,” she tells Amos.

The clouds form in icy temperatures and can be viewed only at certain latitudes and times of day. (Wikimedia Commons)
Though the sky in "The Scream" is outlandish, the painting is widely believed to be autobiographical. Munch himself struggled with tragedy and fragile health that scholars believe could have informed the painting’s colors and themes. In a poem in his diary, Munch recalls the sky turning “blood red” after he felt “a wave of sadness” while walking with some friends. He put a similar poem on the frame of one of his versions of the painting.
That description has prompted other scientists to use natural phenomena to explain the origin of the painting. In 2004, physicists theorized that the clouds were created when Krakatoa erupted in Indonesia—an event that caused spectacular sunsets throughout Europe. But it’s tricky to ascribe a particular date, time, or event to a piece of art, especially since painting is by nature so subjective.
It turns out that mother-of-pearl clouds have a dark side: As Nathan Case explains for The Conversation, they cause the ozone layer to further break down by stoking a reaction that produces free radicals, which can destroy atmospheric ozone. That’s something to scream about—but until scientists invent artistic time machines, their theories about the weather events that precipitated history’s greatest paintings will remain mere suppositions.
Congratulations to Leslie Overstreet! The Catesby Commemorative Trust’s book The Curious Mr. Catesby: A “Truly Ingenious” Naturalist Explores New Worlds has been awarded the 2016 Annual Literature Award by the Council of Botanical and Horticultural Libraries. Leslie, Curator of Natural-History Rare Books in the Joseph F. Cullman 3rd Library of Natural History, authored a chapter of the book.