Found 2,035 Resources containing: Art and computers
Computer art and graphics
33-cent mint single
Issued May 2, 2000
Transcript: 66 pages
An interview of Gyöngy Laky conducted 2007 December 11-12, by Mija Riedel, for the Archives of American Art's Nanette L. Laitman Documentation Project for Craft and Decorative Arts in America, at Laky's home and studio, in San Francisco, California.
Laky speaks of her recent exhibitions; leaving Hungary as a child; using words in art; learning languages; family influences in her art; the family art gallery and Chinese painting; changing majors in college; working with various materials; using recycled materials in her work; retirement; planning her works; working with assistants; working with a small community in Europe; construction of her works; using computers to create art; the craft "renaissance"; scale and outdoor projects; working with dealers and commissioned pieces; emphasis on negative space. Laky also recalls Emile Lahner, Mary Dumas, Ed Rossbach, Judy Foosaner, Peter Voulkos, Joanne Branford, Lillian Elliott, Henry Miller, Louise Nevelson, Darryl Dobras, Brett Christiansen, Kim Ocampo, Jack Lenor Larsen, Martin Puryear, Ann Hamilton, Suzi Gablik, Susan Sontag, and others.
Computers are getting better at some surprisingly human tasks. Machines can now write novels (though they still aren’t great), read a person’s pain in their grimace, hunt for fossils and even teach each other. And now that museums have digitized much of their collections, artificial intelligence has access to the world of fine art.
That makes the newest art historians on the block computers, according to an article at MIT Technology Review.
Computer scientists Babak Saleh and Ahmed Elgammal of Rutgers University in New Jersey have trained an algorithm to look at paintings and detect the works’ genre (landscape, portrait, sketch, etc.), style (Abstract Impressionism, Baroque, Cubism, etc.) and artist. By tapping into the history of art and the latest machine learning approaches, the algorithm can draw connections that had previously been made only by human brains.
To train their algorithm, the researchers used more than 80,000 images from WikiArt.org, one of the largest online collections of digital art. They used this bank of art to teach the algorithm to key in on specific features, such as color and texture, slowly building a model that describes the unique elements of the different styles (or genres or artists). The end product can also pick out objects within the paintings, such as horses, men or crosses.
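The core idea in Saleh and Elgammal's approach — describe each painting by numeric features and learn which feature patterns mark each style — can be illustrated with a toy nearest-centroid classifier. This is only a sketch of the concept; the actual paper uses far richer features and models, and all the numbers below are invented for illustration.

```python
# Toy sketch of style classification from feature vectors.
# The real system uses learned visual features; these invented vectors
# stand in for measurements like [mean brightness, saturation, edge density].

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    """labeled: dict mapping style -> list of feature vectors.
    Returns one centroid per style."""
    return {style: centroid(vecs) for style, vecs in labeled.items()}

def classify(model, vec):
    """Assign vec to the style whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(model, key=lambda s: dist(model[s]))

# Invented training data: two "paintings" per style.
training = {
    "Impressionism": [[0.7, 0.8, 0.3], [0.65, 0.75, 0.35]],
    "Baroque":       [[0.3, 0.4, 0.6], [0.35, 0.45, 0.55]],
}
model = train(training)
print(classify(model, [0.68, 0.77, 0.32]))  # an unseen "painting"
```

Giving the model paintings it has never seen before, as the researchers did, then amounts to calling `classify` on new feature vectors.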
Once it was schooled, the researchers gave their newly-trained algorithm paintings it had never seen before. It was able to name the artist in over 60 percent of the new paintings, and identify the style in 45 percent. Saleh and Elgammal reported their findings at arXiv.org.
The algorithm could still use some tweaking — but some of the mistakes it made are similar to those a human might make. Here’s MIT Technology Review:
For example, Saleh and Elgammal say their new approach finds it hard to distinguish between works painted by Camille Pissarro and Claude Monet. But a little research on these artists quickly reveals both were active in France in the late 19th and early 20th centuries and that both attended the Académie Suisse in Paris. An expert might also know that Pissarro and Monet were good friends and shared many experiences that informed their art. So the fact that their work is similar is no surprise.
The algorithm makes other connections like this one—linking expressionism and fauvism, and mannerism with the Renaissance styles out of which mannerism grew. These connections themselves aren’t new discoveries for the art world. But the machine figured them out in just a few months of work. And in the future the computer could uncover more novel insights. Or, in the nearer future, a machine algorithm able to classify and group large numbers of paintings could help curators manage their digital collections.
While the machines don’t seem poised to replace flesh-and-blood art historians in the near future, these efforts really are merely the first fumbling steps of a newborn algorithm.
Transcript: 52 pages
An interview with Clark Richert conducted 2013 August 20-21, by Elissa Auther, for the Archives of American Art's Stoddard-Fleischman Fund for the History of Rocky Mountain Area Artists, at the Museum of Contemporary Art in Denver, Colorado.
Richert speaks of deciding to become an artist; his influences; studying art at the University of Kansas; the Wichita Vortex; Droppings: working with geometric art; Drop City; geodesic domes; The Ultimate Painting; Zome Toy; Criss-Cross; using computers in art; being a teacher; the Armory Group; and A.R.E.A. Richert also recalls Bruce Conner, Richard Kallweit, Joan Brown, Gene Bernofsky, Buckminster Fuller, Michael McClure, Allan Kaprow, Nick Sands, Robert Rauschenberg, Mark Rothko, Jay DeFeo, Dean Fleming, Linda Fleming, John Fudge, and others.
Accompanied by a typewritten sheet on which Hammersley describes his process for making this computer art (description dated 1992 July 11).
View 3 shows the two cards laid on top of one another.
Each November, hundreds of thousands of writers take part in National Novel Writing Month (NaNoWriMo)—the goal of which is to pump out a 50,000 word novel in one month. But this year and last, some creative types took a different tack to getting novels made. Rather than bleeding their souls on to the page, some aspiring authors with coding savvy used computers to do the writing for them, says the Verge.
Known as National Novel Generation Month, or NaNoGenMo, the spin-off event saw programmers work to write code that would, in turn, write a novel.
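One of the most common NaNoGenMo tactics is a Markov-chain generator: learn which word tends to follow which in some source text, then walk the chain. The sketch below is a minimal, hypothetical illustration of that approach (the seed text and parameters are invented); real entries drew on far larger corpora.

```python
# Toy Markov-chain text generator, in the spirit of many NaNoGenMo entries.
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, n_words, rng):
    """Walk the chain for n_words, restarting at the seed word at dead ends."""
    word, out = start, [start]
    for _ in range(n_words - 1):
        followers = chain.get(word)
        word = rng.choice(followers) if followers else start
        out.append(word)
    return " ".join(out)

seed_text = "the rain fell and the night fell and the story began again"
chain = build_chain(seed_text)
print(generate(chain, "the", 12, random.Random(42)))
```

Scaled up to 50,000 words, output like this tends toward exactly the choppy, dreamlike prose the articles describe.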
Last year, says the Stranger, the results were often disjointed, robotic scripts. Yet some of the computer-generated novels were published, says the Verge, including one by MIT professor Nick Montfort.
“[R]eading an entire generated novel is more a feat of endurance than a testament to the quality of the story, which tends to be choppy, flat, or incoherent by the standards of human writing,” says the Verge. But there's no guarantee of quality in NaNoWriMo proper, either, and there's probably less risk of emergent cryptozoological erotica.
Of the computer-generated novels, says the Stranger, “[s]ome of them seem virtually indistinguishable from a certain kind of contemporary novel, à la Tao Lin. Others read remarkably like a sentient person's dream journal.”
Creative and artistic feats are often seen as the last refuge for human endeavor from the coming robot apocalypse. But if NaNoGenMo gains a foothold and improves, at least we'll all be well entertained in our unemployment.
This 21msp50/55/56 digital signal processor chip was created by Analog Devices Incorporated around 1994. The chip contains an image of a fire-breathing Godzilla.
Last year, a group of German computer scientists made waves by demonstrating a new computer algorithm that could transform any digital still image into artwork mimicking the painterly styles of masters like Vincent van Gogh, Pablo Picasso, and Edvard Munch. Though an impressive feat, applying the same technique to moving images seemed outrageous at the time. But now, another group of researchers have figured it out, quickly and seamlessly producing moving digital masterpieces, Carl Engelking writes for Discover.
In a video demonstration, the programmers show off their algorithm’s artistic abilities by transforming scenes from movies and television shows like Ice Age and Miss Marple into painting-like animations with the click of a mouse. But developing the algorithm was no small feat.
To create such a detailed transformation, computer scientist Leon Gatys and his colleagues at the University of Tübingen developed a deep-learning algorithm that runs off an artificial neural network. By mimicking the ways that neurons in the human brain make connections, these machine learning systems can perform much more complicated tasks than any old laptop.
Here’s how it works: when you’re looking at a picture of a painting or watching a movie on your laptop, you’re witnessing your computer decode the information in a file and present it in the proper manner. But when these images are processed through a neural network, the computer is able to take the many different layers of information contained in these files and pick them apart piece by piece.
For example, one layer might contain the information for the basic colors in van Gogh’s Starry Night, while the next adds a little more detail and texture, and so on, according to the MIT Technology Review. The system can then alter each layer individually before putting them back together to create a whole new image.
“We can manipulate both representations independently to produce new, perceptually meaningful images,” Gatys wrote in a study published on the arXiv preprint server.
By applying this system of layer-based learning to paintings by Picasso and van Gogh, to name a few, the researchers were able to develop an algorithm that “taught” the computer to interpret all this information in a way that separates the content of a painting from its style. Once it understood how van Gogh used brushstrokes and color, it could then apply that style like a Photoshop filter to an image and effectively recreate it in his iconic style, Matt McFarland wrote for the Washington Post. But applying this technique to video presented a whole new set of problems.
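In Gatys and colleagues' published method, "style" is captured by Gram matrices: for each network layer, a matrix of correlations between that layer's feature maps. Two images with similar Gram matrices share texture and brushstroke statistics even when their content differs, which is what lets the algorithm separate style from content. A minimal sketch of the computation, using tiny invented 2x2 feature maps rather than real network activations:

```python
# Gram matrix of a layer's feature maps: the style representation used by
# Gatys et al. Feature maps here are invented toy grids, not real activations.

def gram_matrix(feature_maps):
    """feature_maps: list of equal-size 2D maps (one per channel).
    Returns the C x C matrix of inner products between flattened maps,
    normalized by map size."""
    flat = [[v for row in fm for v in row] for fm in feature_maps]
    n = len(flat[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in flat]
            for fi in flat]

# Two invented feature maps (channels) for one layer of a 2x2 "image".
maps = [
    [[1.0, 0.0], [0.0, 1.0]],   # channel 0
    [[0.0, 1.0], [1.0, 0.0]],   # channel 1
]
G = gram_matrix(maps)
print(G)  # the two channels never fire together, so off-diagonals are 0
```

Style transfer then optimizes a new image so that its Gram matrices match the painting's while its raw feature maps match the photograph's content.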
“In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time,” Manuel Ruder and his team from the University of Freiburg write in their new study, also published on arXiv. “Doing this for a video sequence single-handed was beyond imagination.”
When Ruder and his colleagues first tried applying the algorithm to videos, the computer churned out gobbledygook. Eventually, they realized that the program was treating each frame of the video as a separate still image, which caused the video to flicker erratically. To get past this issue, the researchers put constraints on the algorithm that kept the computer from deviating too much between frames, Engelking writes. That allowed the program to settle down and apply a consistent style across the entire video.
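The constraint described above can be sketched as a penalty term that grows when a stylized frame strays from its predecessor, so the optimizer favors frame-to-frame consistency. This is a deliberately simplified illustration: Ruder's published loss also uses optical flow and occlusion masks, while the toy version below just compares pixels directly.

```python
# Toy temporal-consistency penalty: large when consecutive frames differ,
# small when they agree. Frames are invented flat lists of pixel values.

def temporal_penalty(prev_frame, cur_frame, weight=1.0):
    """Weighted mean squared difference between consecutive frames."""
    n = len(prev_frame)
    return weight * sum((a - b) ** 2 for a, b in zip(prev_frame, cur_frame)) / n

steady  = temporal_penalty([0.5, 0.5, 0.5], [0.52, 0.49, 0.50])
flicker = temporal_penalty([0.5, 0.5, 0.5], [0.90, 0.10, 0.80])
print(steady < flicker)  # True: the flickering pair is penalized more
```

Adding such a term to the per-frame style loss is what "kept the computer from deviating too much between frames."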
The algorithm isn’t perfect and often has trouble handling larger and faster motion. However, this still represents an important step forward in the ways computers can render and alter video. While it is in its early stages, future algorithms might be able to apply this effect to videos taken through a smartphone app, or even render virtual reality versions of your favorite paintings, the MIT Technology Review reports.
While the idea of boiling down an artist’s style to a set of data points may rankle some people, it also opens the door to new kinds of art never before believed possible.
Transcript: 109 pages
An interview of Boris Bally conducted 2009 May 26-27, by Mija Riedel, for the Archives of American Art's Nanette L. Laitman Documentation Project for Craft and Decorative Arts in America, at Bally's home and studio, in Providence, Rhode Island.
The artist speaks of his current studio in Providence, Rhode Island; working without a studio assistant; the benefits of working with studio assistants without an art-school background; apprenticing with Swiss metalsmith Alexander Schaffner when Bally was 19; his own de facto apprenticeship program with his studio assistants; his parents as role models; his vision at age 19 for his career plan; his early interest in CAD; growing up with Swiss-born parents, both with art/design backgrounds; visiting Switzerland as a child; his father's studies with Buckminster Fuller in the late 1950s; his mother's class with L. Brent Kington, whom Bally later studied with; growing up in Pittsburgh, Pennsylvania; his first home metal shop at nine years old; his first formal metal class at about 14 years old; making and selling jewelry throughout his teens; informal apprenticeship with Jeff Whisner; his father's design firm, launched in his last year of high school; summer studying at the Pennsylvania Governor's School for the Arts; year-long apprenticeship in Switzerland; watching Schaffner make and sell a wide variety of objects, which later informed Bally's own perspective; his continuing relationship with Schaffner; undergraduate studies at Tyler School of Art, Philadelphia, Pennsylvania; studying with Daniella Kerner and Vickie Sedman at Tyler; transferring to Carnegie Mellon University, Pittsburgh, Pennsylvania, to study with Carol Kumata; making a "happiness machine"; transition from jewelry to larger sculptures; using found and scavenged materials; meeting Rosemary Gialamas (Roy) and their eventual elopement; moving to the Boston area; work as an industrial design model-maker; the New York art scene of the 1980s; representation with Archetype Gallery, New York, New York; slow but steady artistic recognition and commercial success of his functional objects; Sliding Perfections, flatware; teaching Gialamas metalsmithing and collaborative works by the two; early teaching 
experience in adult education classes in Cambridge, Massachusetts, then at Massachusetts College of Art, Boston; return to Pittsburgh in 1989, where Bally took a teaching position at Carnegie Mellon in the design department; studio on Bigelow Boulevard; difficulties in his marriage; a commission from the Society of Arts and Crafts, Boston, Massachusetts, and the beginnings of his traffic sign pieces in a collaborative piece with Gialamas; starting his platters series; the dissolution of his marriage to Gialamas in 1993; meeting Lynn, whom he later married; his love of teaching and his teaching philosophy; teaching at Penland School of Crafts, Penland, North Carolina; move to Providence, Rhode Island, to devote his time to studio work; the pros and cons of craft and arts schools versus university settings; the intersection of art, design, and industry: his Humanufactured line of products; functional work in the late '80s, and the influence of a trip to Haiti in the 1980s; bottle cork pieces; Trirod vessels; "More than One: Contemporary Studio Production" exhibition, American Craft Museum, New York, New York, 1992-94; philosophy of making; working in series form; truss pieces; perforation pieces and Vessel with a Silver Heart (1993); armform series; "Jewelries, Epiphanies" exhibition, Artists Foundation Gallery at Cityplace, Boston, Massachusetts, 1990; inclusion in One of a Kind: American Art Jewelry Today, by Susan Grant Lewin. (New York, NY: Harry N. 
Abrams, 1994); series Dig Wear and Eat Wear bracelets; Calimbo vessel and the Fortunoff prize; gold Tread Wear brooches in the mid-1990s; creating his first chair; moving from hand-made solo work to furniture and a design and production focus; starting to patent his designs in the mid-1990s; further exploration of design and technique in his chairs; "GlassWear: Glass in Contemporary Jewelry," Museum of Arts and Design, New York, New York, 2009; Pistol Chalice and work with the Pittsburgh gun buyback program; traveling exhibition for the project; Gun Totem; Brave necklace; BroadWay armchair; Subway chair; new techniques for graphics on the furniture; his relationship with former scrapyard owner Paul Warhola, brother to Andy Warhol; commission work, and the importance of commerce in his career and worldview; commission for Comedy Central television network; the changing craft market and the boom times of the 1980s; work with galleries, including: Patina, Santa Fe, New Mexico; Velvet da Vinci, San Francisco, California; Snyderman-Works, Philadelphia, Pennsylvania; Nancy Sachs Gallery, St. Louis, Missouri; the Society of Arts and Crafts, Boston, Massachusetts; seeing one of his pieces used on a set for a daytime television soap opera and in the movie Sex and the City; the recent "green" (environmentally conscious) trend; blurring boundaries of design and art and craft; growing acceptance of artist-made and -designed multiples; pros and cons of computer technology in art and craft; the pros and cons of the DIY (do-it-yourself) craft movement; influential writers, including Rosanne Raab, Marjorie Simon, Steven Skov Holt and Mara Holt Skov, Bruce Metcalf, Toni Greenbaum, Matthew Kangas, Gail Brown; his involvement in the Society of North American Goldsmiths; making metal benches for his children. 
He also recalls Heather Guidero, Julian Jetten, Pam Moloughney, Dennis Kowal, Ursula Ilse-Neuman, Bob Ebendorf, Jason Spencer, Rob Brandegee and Ava DeMarco, Stefan Gougherty, Flo Delgado, L. Brent Kington, Curtis Aric, Ralph Düby, Steve Korpa, Joe Wood, Joe Ballay, Yves Thomann, Andy Caderas, James Thurman, Nicholas (Nico) Bally, Elena Gialamas, James Gialamas, Elvira Peake, Ronald McNeish, Johanna Dahm, Jerry Bennet, Kathleen Mulcahy, Nelson Maniscalco, Tom Mann, Otto Künzli, Stanley Lechtzin, Christopher Shellhammer, David Tisdale, Dean Powell, Daniel Carner, Donald Brecker, Robert Schroeder, Phil Carrizzi, Lucy Stewart, Elisabeth Agro, Rachel Layton, Sarah Nichols, Peter Nassoit, Dan Niebels, Mary Carothers, Ward Wallau, Ivan Barnett and Alison Buchsbaum, Jonathan Bonner, Raymond and Patsy Nasher, Beth Gerstein, George Summers Jr., Pavel Opocensky, Buddy Cianci, David Cicilline.
Prosthetics are largely built to look and function like the limb they’re replacing. But it need not be so. Running prosthetics for lower-leg amputees are more like curved metal springs than the legs they replace. And now, a group of students in Germany are working on a digital hand prosthetic that will allow users direct control of a computer.
Operating a mouse or trackpad with a traditional prosthetic is challenging, enough so that the common practice is to learn to work with the opposite hand. David Kaltenbach, Lucas Rex and Maximilian Mahal, students of design at the Berlin Weissensee School of Art, have prototyped a new device that tracks gestures in an amputated limb and translates them to computer commands—scroll, click, right-click.
“If you’re in an office job, you have to deal with computers, and if you’re missing your hand … then it’s obviously very inconvenient to use a desktop computer, and there’s no real solution to that,” says Rex.
Most upper extremity amputations are due to work injuries, and most of those are in a job that relies on the hands, says Uli Maier, a certified prosthetist and orthotist at Ottobock, a German company that produces prosthetics. “If you lose them, you’re out of your job, so you have to change your life totally. And you have to find a job where you can work with one hand, and these jobs are mostly in offices,” says Maier. “Just try to work one day with only one hand on your computer and you will see what I’m talking about.”
Maier visited the class that Kaltenbach, Rex and Mahal were a part of, lecturing on prosthetics and Ottobock’s programs. He helped the students conceive of the project, which they call Shortcut, based on his experience as a technician in patient care. “This is necessary for amputees of the upper extremities, and the things existing on the market are horrible,” says Maier.
Image by Shortcut. Students of design at the Berlin Weissensee School of Art have prototyped a new device that tracks gestures in an amputated limb and translates them to computer commands. (original image)
The Shortcut consists of two parts. An optical sensor, like the one on the underside of a mouse, is housed in a wristband that goes around a normal prosthetic. Like a mouse, it tracks movement in relation to a tabletop, and translates it to the cursor. Myoelectric sensors, mounted on the residual limb, track the small voltages that travel down the remaining nerves. It’s a bit like how amputees can still feel the hand they do not have; after amputation, your brain can still send signals to clench, pinch, twist, and more. A microcontroller housed in the bracelet runs code to translate particular movements—touching a thumb to pointer finger, for example, or flexing a hand back—into outputs, such as scroll, zoom, drag and drop, and more, and then the device communicates that to a computer via Bluetooth.
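The gesture-to-command mapping described above can be sketched in a few lines: sensor readings are matched against a gesture catalog, and each recognized gesture is translated into a command to send over Bluetooth. Everything specific here (threshold values, channel count, gesture names) is invented for illustration; the actual Shortcut firmware has not been published.

```python
# Hypothetical sketch of Shortcut-style gesture recognition: two myoelectric
# channels (millivolt readings) are thresholded into named gestures, which
# map to computer commands. All numbers and names are invented.

GESTURE_COMMANDS = {
    "thumb_to_index": "click",
    "hand_flex_back": "scroll",
    "wrist_twist":    "right_click",
}

def recognize(sensor_mv):
    """Classify a (channel1, channel2) millivolt pair into a gesture name."""
    c1, c2 = sensor_mv
    if c1 > 300 and c2 < 100:
        return "thumb_to_index"
    if c1 < 100 and c2 > 300:
        return "hand_flex_back"
    if c1 > 300 and c2 > 300:
        return "wrist_twist"
    return None  # below threshold: resting muscle activity, no gesture

def to_command(sensor_mv):
    """Return the command the microcontroller would send over Bluetooth."""
    return GESTURE_COMMANDS.get(recognize(sensor_mv))

print(to_command((350, 50)))   # a strong channel-1 contraction -> click
print(to_command((50, 50)))    # resting activity -> no command
```

In the real device this logic would run continuously on the bracelet's microcontroller, with the optical sensor supplying cursor motion in parallel.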
There’s actually a whole category of myoelectric prosthetics already—it’s sort of the standard for higher-end electric prosthetics. The sensors work the same way, but they instead control the prosthesis itself, running electric motors to grasp the fingers or rotate the wrist. There are also other methods of computer interface, from voice activation and transcription (with programs like Siri and Dragon) to brainwave computer-control interfaces. These technologies are either designed for more specific scenarios, or still in the early stages.
“We’re not trying to rebuild what was there before, like make a bad replica of an organic hand,” says Rex. “Why not bypass the interface that was built for organic hands, and rather communicate to digital infrastructure directly?”
Kaltenbach, Rex and Mahal are still in the prototyping phase. A 3D-printed housing contains off-the-shelf components, all of which would have to be redesigned to fit into a much smaller bracelet. Currently, the students are participating in the DesignFarmBerlin accelerator, and working to refine the gesture catalog and make it smaller and more precise. Maier has shared the idea around Ottobock, and says there are lots of amputees excited to try it out. One day, such technology might appear in a traditional prosthesis, as one of its many functions.
Before 1975, the Smithsonian's central data processing organization was named the Information Systems Division (ISD), under the leadership of Nicholas J. Suszynski, Jr. From May 29, 1975 to April 3, 1982, it was called the Office of Computer Services (OCS). On April 4, 1982, it was renamed the Office of Information Resource Management (OIRM). Stanley A. Kovy (January 4, 1931 - April 15, 1998) was the only Director of OCS. The central data processing organization had several other names at different times in its history.
The Smithsonian operated several different mainframe computers, manufactured by both Honeywell and IBM. The mainframe computers were all located on the third floor of the SW quadrant of the Arts & Industries Building, in the Computer Center room 3335. This space was used from 1967 to c. 2006, when operations were moved to the Herndon Data Center.
Photo taken in the Computer Center, Arts & Industries Building, on November 5, 1971 at the installation and dedication of the "new" Honeywell 2015 mainframe computer. In Sept-Nov. 1967, the Smithsonian ordered and installed a Honeywell 1200; the 2015 was a replacement and an upgrade to the 1200. Both mainframe computers were members of the H-200 family or model line. At least seven Honeywell employees are represented in these images, many wearing Honeywell name tags.
Smithsonian staff gathered around the "new" Honeywell 2015 mainframe computer following its installation and dedication. A banner has been hung reading "Mr. Bradley . . . I'm Yours!". From left to right are: Keith Laverty, Accountant; unknown; Robert A. Brooks, Deputy Under Secretary; Richard Ault, Director of Support Activities; unknown; James C. Bradley, Under Secretary; John Beach, ISD; George M. Seminara (mostly hidden), ISD; and unknown.
The Apple II used a MOS 6502 chip for its central processing unit. It came with 4 KB RAM, but could be extended up to 48 KB RAM. It included a BASIC interpreter and could support graphics and a color monitor. External storage was originally on cassette tape, but later Apple introduced an external floppy disk drive. Among the Apple II's most important features were its 8 expansion slots on the motherboard. These allowed hobbyists to add additional cards made by Apple and many other vendors who quickly sprung up. The boards included floppy disk controllers, SCSI cards, video cards, and CP/M or PASCAL emulator cards.
In 1979 Software Arts introduced the first computer spreadsheet, VisiCalc, for the Apple II. This "killer application" was extremely popular and fostered extensive sales of the Apple II.
The Apple II went through several improvements and upgrades. By 1984, when the Macintosh appeared, over 2 million Apple II computers had been sold.
Since the Smithsonian American Art Museum acquired the Nam June Paik archive in 2009, the museum's researchers have delighted in cataloging the whimsical and diverse materials accumulated by the playful father of video art: reams of papers plus a cornucopia of objects: TV sets, birdcages, toys and robots.
Two of the more amazing finds—a silent new opera written in computer code from 1967 and a previously unknown Paik TV Clock—will make their first public appearance in "Watch This! Revelations in Media Art," an exhibition that opens on April 24.
Michael Mansfield, curator of film and media arts at the museum, says that former-Smithsonian post-doctoral fellow Gregory Zinman (currently a professor at Georgia Tech), found the truly history-making original computer opera that was created in 1967 at the Bell Telephone Laboratories, then the research unit for AT&T’s Bell System in Murray Hill, New Jersey. “Bells went off when Greg saw a sheet of Fortran code and realized it was done at Bell Labs,” Mansfield says. “There were a very limited number of artworks that came out of Bell Labs.”
Titled Etude 1, the unfinished work includes a piece of fax paper with an image on it and an accordion-folded, pencil-annotated printout of Fortran code dated Oct. 24, 1967.
Nam June Paik (1932-2006), the Korean-born composer, performance artist, painter, pianist and writer is the acknowledged grandfather of video art. A seminal figure in the avant-garde in Europe and America in the 1960s, 1970s and 1980s, Paik transformed video into a medium for art—manipulating it, experimenting with it, playing with it—thereby inspiring generations of future video artists. Paik has already been the subject of museum retrospectives at the Whitney (1982), the Guggenheim (2000) and the Smithsonian (2013), but the discovery of his computer opera charts new territory in the intersection of art and technology.
Nam June Paik (1932-2006) (Christopher Felver/CORBIS)
Paik’s intent was clear.
“It is my ambition to compose the first computer-opera in music history,” Paik wrote to the director of arts programming at Rockefeller University, seeking a grant, in the mid-1960s. He even mentions a GE-600, a “mammoth” new room-size computer at Bell Labs.
But how did Paik get to Bell Labs, the most top-secret, innovative scientific organization in the world at that time? Bell Labs are not known for art, but for innovations in transistors, lasers, solar cells, digital computers, fiber optics, cellular telephony and countless other fields (its scientists have won seven Nobel Prizes). That is a tale it has taken some time to unravel.
In the 1960s Bell’s senior management briefly opened the labs to a few artists, inviting them to use the computer facilities. Jon Gertner touches on this in his excellent book, The Idea Factory: Bell Labs and the Great Age of American Innovation (Penguin Books, 2012), but he doesn’t focus on the artists, including the 1960s animator Stan VanDerBeek, Jean Tinguely, the musician Leopold Stokowski—and Paik.
“The engineers turned to artists to see if the artists would understand the technology in new ways that the engineers could learn from,” Zinman explains. “To me, that moment, that confluence of art and engineering, was the genesis of the contemporary media-scape.”
Etude 1 is the needle in the haystack of the Smithsonian’s Paik archive, a 2009 gift of seven truckloads of material from Ken Hakuta, Paik’s nephew and executor. It includes 55 linear feet of papers, videotapes, television sets, toys, robots, birdcages, musical instruments, sculptures and one opera.
Etude 1 is one of three works that Paik created at Bell Labs and that are held in the museum's collections, Mansfield explains. Digital Experiment at Bell Labs is a short silent film that records what was happening on the screen of the cathode ray tube for four minutes as Paik ran his program through the computer. It is a series of rotating numbers and flashing white dots.
Confused Rain is a tiny snippet of film negative. Looking a bit like concrete poetry, the image is of seemingly random appearances of individual black letters of the word “confuse” falling like drops of rain against a plain white background.
Etude 1 is a piece of Thermo fax paper with an image that looks like a four-leaf clover, with four overlapping circles. Each circle has concentric inner circles composed of individual letters of the alphabet. The circle to the left is formed from the letters of the word “God.” The circle to the right, from the word “Dog.” The circle on top, from “Love,” the circle on the bottom, from “Hate.”
What does all this mean?
“It is completely open to interpretation,” Mansfield says. “I’m fascinated that Paik was using letters from the English alphabet to compose a visual work of art. He was aiming to put some human-ness into the machine. He was focused on the human use of technology. I think it corresponded to his need for a poetic alternative to the language of programming.”
Why “God, Dog, Love, Hate”?
“These are basic words with big concepts,” Mansfield says.
An accordion-folded, pencil-annotated printout of Fortran code dated Oct. 24, 1967, from Etude 1, 1967-1968. (Nam June Paik Archive; Gift of the Nam June Paik Estate, © Nam June Paik Estate, Smithsonian American Art Museum)
“I think it has to do with opposites, Paik’s play on words,” Zinman adds. “My guess is that he found that amusing. It also could be that short terms could be plotted more easily.”
The same words appear on the printout of Fortran code dated Oct. 24, 1967. An accompanying Bell Labs punch card, which allowed the computer to run the program, carries the name of a Bell Labs programmer, A. Michael Noll, the pioneer in algorithmic art and computer-animated film who monitored Paik’s visits.
As Noll, now professor emeritus of Communications at the Annenberg School for Communication and Journalism at the University of Southern California, recalls, “I was surprised when printouts with Paik’s name along with mine were discovered in the Smithsonian archive, though Paik’s visit to Bell Labs was the result of my visit, along with Max Mathews of Bell Labs, to Paik’s studio on Canal Street in New York.”
Mathews, who rose to become the head of the Bell Labs acoustic and behavioral research unit, was working on computer-generated music at the time and so knew of Paik, who had moved to New York from Germany in 1964 and was already an emerging performance artist.
“Mathews invited Paik to visit the lab and assigned him to me, but now, almost 50 years later, I do not recall much about what he might have done,” Noll says. “I gave him a short introduction to the Fortran programming language. He most likely then went off on his own, writing some programs to control the microfilm plotter to create images. The challenge back then was that programming required thinking in terms of algorithms and structure. Paik was more used to handwork.” He never saw what Paik did.
Still, Paik must have been excited about the new technology. Although it is not yet known how he physically got from the city to the labs in the New Jersey countryside, he visited every three or four days in the fall of 1967. Then, he started going less frequently.
“He was frustrated because it was just too slow and not intuitive enough,” Zinman says. “Paik moved very fast. He once said his fingers worked faster than any computer. He thought the computer would revolutionize media—and he was right—but he didn’t like it.”
Then he stopped going entirely.
“It put a real financial strain on him,” Mansfield says. “Paik was a working artist, selling works of art to live, and he was also purchasing his own technology. He was becoming distracted by his electronic artworks.”
Nonetheless, Paik’s work at Bell Labs was important.
“His idea was to take things apart,” Zinman says. “He was playful, interested in disrupting patterns. He wanted to rethink how media worked, just as he wanted TV to be a two-way communicative device, going back and forth. He was modeling a way for people to take control of the media, instead of being passive.”
Adds Noll: “Bell Telephone Laboratories was a tremendous place to allow such artists access. I am working on documentation of the battle between Bell Labs management and one individual at AT&T who objected to work in computer art and other areas that this one person deemed ‘ancillary.’ In the end, the most senior management—William O. Baker—decided to ignore AT&T and follow the challenge of A.G. Bell to ‘Leave the beaten track occasionally and dive into the woods.’”
Paik has never been more popular. There was recently a show of his work at the James Cohan gallery in New York; he was the subject of an entire booth at the recent Art Fair in New York and also appeared in a stand at the European Fine Art Fair this year in Maastricht, the Netherlands. His works are selling—and for hundreds of thousands of dollars apiece. It seems another generation is rediscovering the father of video art—and embracing him wholeheartedly.
Etude 1 along with the recently recovered TV Clock will debut in the exhibition Watch This! Revelations in Media Art, which opens at the Smithsonian American Art Museum April 24 and runs through September 7, 2015. The show includes works by Cory Arcangel, Hans Breder, Takeshi Murata, Bruce Nauman and Bill Viola, among dozens of others, and will include 16 mm films, computer-driven cinema, closed-circuit installations, digital animation and video games. Learn more about the museum's discovery of the art work on Eye Level, in the article "Computers and Art" by curator Michael Mansfield.
Artificial intelligence is getting pretty good at besting humans in things like chess and Go and dominating at trivia. Now, AI is moving into the arts, aping van Gogh’s style and creating a truly trippy art form called Inceptionism. A new AI project is continuing to push the envelope with an algorithm that only produces original styles of art, and Chris Baraniuk at New Scientist reports that the product gets equal or higher ratings than human-generated artwork.
Researchers from Rutgers University, the College of Charleston and Facebook’s AI Lab collaborated on the system, a type of generative adversarial network, or GAN, in which two independent neural networks critique each other. In this case, one network is the generator, which creates pieces of art. The other is the “discriminator” network, which was trained on 81,500 images from the WikiArt database, spanning centuries of painting. The discriminator learned to tell the difference between a piece of art and a photograph or diagram, and it also learned to identify different styles of art, for instance impressionism versus pop art.
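The adversarial push-and-pull between the two networks can be sketched in miniature. Below is a toy, deterministic illustration (not the researchers' actual model): "real art" and "fakes" are stand-in numbers, the discriminator is a one-variable logistic regression, and the generator is a simple shift. One gradient step for each side shows the core GAN loop, each network improving its own objective against the other.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for image features: "real art" clusters near 4, fakes start near 0.
real = np.array([3.5, 4.0, 4.5])
z = np.array([-0.5, 0.0, 0.5])   # generator noise

# Generator: shift the noise by b. Discriminator: logistic regression (w, c).
b, w, c, lr = 0.0, 0.1, 0.0, 0.1

def d_loss(w, c, real, fake):
    # Binary cross-entropy: the discriminator should say 1 on real, 0 on fake.
    return (-np.mean(np.log(sigmoid(w * real + c)))
            - np.mean(np.log(1 - sigmoid(w * fake + c))))

fake = z + b
before = d_loss(w, c, real, fake)

# One discriminator step: descend the cross-entropy gradient.
d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
w -= lr * (-np.mean((1 - d_real) * real) + np.mean(d_fake * fake))
c -= lr * (-np.mean(1 - d_real) + np.mean(d_fake))
after = d_loss(w, c, real, fake)

# One generator step: shift b so the discriminator scores the fakes higher.
score_before = np.mean(sigmoid(w * fake + c))
b += lr * np.mean((1 - sigmoid(w * fake + c)) * w)
score_after = np.mean(sigmoid(w * (z + b) + c))

print(after < before, score_after > score_before)  # → True True
```

In a real GAN the same alternation runs for many thousands of steps over image pixels rather than scalars, with deep convolutional networks in both roles.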
The MIT Technology Review reports that the first network created random images, then received analysis from the discriminator network. Over time, it learned to reproduce different art styles from history. But the researchers wanted to see if the system could do more than just mimic humans, so they asked the generator to produce images that would be recognized as art, but did not fit any particular school of art. In other words, they asked it to do what human artists do—use the past as a foundation, but interpret that to create its own style.
At the same time, researchers didn’t want the AI to just create something random. They worked to train the AI to find the sweet spot between low-arousal images (read: boring) and high-arousal images (read: too busy, ugly or jarring). “You want to have something really creative and striking – but at the same time not go too far and make something that isn’t aesthetically pleasing,” Rutgers computer science professor and project lead, Ahmed Elgammal, tells Baraniuk. The research appears on arXiv.
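The "deviate from known schools without becoming random" idea can be made concrete with a style-ambiguity term: the generator is rewarded when the discriminator's style classifier cannot pin its image to any one school. A minimal sketch of that term, assuming a hypothetical 4-way style classifier (the function name and probabilities are illustrative, not the paper's exact loss):

```python
import numpy as np

def style_ambiguity(style_probs):
    # Cross-entropy between a uniform target and the predicted style posterior;
    # it is smallest exactly when the classifier is maximally uncertain.
    k = len(style_probs)
    uniform = np.full(k, 1.0 / k)
    return -np.sum(uniform * np.log(style_probs))

confident = np.array([0.94, 0.02, 0.02, 0.02])  # clearly one style: penalized
ambiguous = np.array([0.25, 0.25, 0.25, 0.25])  # no recognizable style: rewarded
print(style_ambiguity(ambiguous) < style_ambiguity(confident))  # → True
```

Combined with the ordinary "looks like art" signal from the discriminator, this pulls the generator toward images that read as art while fitting no established style.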
The team wanted to find out how convincing their AI artist was, so they displayed some of the AI artwork on the crowdsourcing site Mechanical Turk along with historical Abstract Expressionist paintings and images from Art Basel's 2016 show in Basel, Switzerland, reports MIT Technology Review.
The researchers had users rate the art, asking how much they liked it, how novel it was, and whether they believed it was made by a human or a machine. It turns out the AI art rated higher in aesthetics than the art from Basel and was found "more inspiring." The viewers also had difficulty telling the difference between the computer-generated art and the Basel offerings, though they were able to differentiate between the historical Abstract Expressionism and the AI work. “We leave open how to interpret the human subjects’ responses that ranked the CAN [Creative Adversarial Network] art better than the Art Basel samples in different aspects,” the researchers write in the study.
As such networks improve, the definition of art and creativity will also change. MIT Technology Review asks, for instance, whether the project is simply an algorithm that has learned to exploit human emotions and not truly creative.
One thing is certain: it will never cut off an ear for love.
Response by Jesse Chun to a participation invitation for a 2019 "What is Feminist Art?" exhibition at the Archives of American Art. The questionnaire response consists of a printed screenshot of the Apple Notes application.