reBlog

Current Reblogger: Chloë Bass

Chloë Bass is an artist, curator and community organizer based in Brooklyn. She is the co-lead organizer for Arts in Bushwick (artsinbushwick.org), which produces the ever-sprawling Bushwick Open Studios, BETA Spaces, and the performance festival SITE Fest, which she founded. Recent artistic work has been seen at SCOPE Art Fair, CultureFix, the Bushwick Starr Theater, Figment, and The Last Supper Art Festival, as well as in and around the public spaces of New York City. She has guest lectured at Parsons, the Polytechnic University of Puerto Rico, and Brooklyn College. Other moments have found her co-cheffing Umami: People + Food, a 90-person private supper club; growing plants with Boswyck Farms (boswyckfarms.org); and curating with architecture gallery SUPERFRONT (superfront.org). Chloë holds a BA in Theater Studies from Yale University, and an MFA in Performance and Interactive Media Arts (PIMA) from Brooklyn College.

http://chloebass.wordpress.com


Shared by reBlog @ Eyebeam

The first edition of Botacon took place last weekend in Brooklyn. The lineup of speakers was impressive and made for one of the best conferences I’ve ever attended. But one materials-related presentation stood out. Mr. Kim and John Sarik gave a talk titled “MakerBot Printable Transistors and OLEDs or I want to be Jeri Ellsworth when I grow up.” In it the Columbia Laboratory for Unconventional Electronics researchers described using a wooden RepRap Mendel, equipped with a MakerBot Unicorn-style pen plotter and a Micron pen, to print semiconductors!

Today it’s possible to print organic field-effect transistors (OFETs), organic light emitting diodes (OLEDs), and other devices using sophisticated laboratory equipment. But why should academics have all the fun? The goal of this project is to design a fabrication process that allows MakerBot owners to print their own electronics using (ideally) inexpensive and easy-to-source materials. In the first phase of the project we are using a RepRap, plotter pens, and research grade materials to create devices. The second phase of the project will focus on exploring new device materials. This is an ongoing project and we are looking for collaborators.

(Photo: Mr. Kim and John Sarik)

Mr. Kim and Sarik experimented with a variety of materials (conductive silver ink, the semiconducting polymer P3HT, and CP1 resin), which they loaded into Rapidograph and Pigma Micron pens. According to the researchers, this is a nine-step process:

(Photos of the nine-step process: Mr. Kim and John Sarik)

The project doesn’t yet have a website but, in the DIY spirit of this research, Mr. Kim uploaded the field effect transistor patterns to Thingiverse and made the talk’s slides publicly available at mrkimrobotics.com.

All photos provided by John Sarik and Mr. Kim. John: thank you so much for discussing this fascinating research with me and for sending us the presentation materials.

 
Shared by reBlog @ Eyebeam

Paper, wood, and traditional media aren’t tied to one vendor. They don’t require licenses or agreements. They aren’t, generally speaking, incompatible. If digital art is going to provide artists with the same freedom, it stands to reason that artists working with computation will find ways to make any pixel their medium.

Processing is a good example. It takes some time, but eventually, the understanding dawns upon you: Processing is more a design for how to code, an API, than it is a specific platform. Taken further, heck, it’s more like a way of life – sketch on paper, write simple code, prototype fast, make something happen.

I gave a talk this weekend at the Mobile Art Conference. The gathering was focused on using mobile devices, primarily Apple mobiles, as canvases. iOS was certainly the focus; the room was a sea of iPhones and iPads. But none of these gorgeous painting apps would be possible without code, and so I was happy to get the chance to talk about the code that drives this art.

Love or hate Apple’s devices, the world isn’t entirely populated by them, which means artists will likely want ways of working across other gadgets and screens. I looked at how easy it was to use the Processing environment to “sketch” with code directly on a mobile device. For sound, I covered the libpd library and loading Pure Data patches on iOS devices. Because of the bent of the audience, I was talking largely to non-programmers, but I hope they got the idea – not least because I think these tools are wonderful ways to go from being a non-coder to coder. (That sure happened for me, and the people I teach.) You can see my slides above.

You’ll see that in one of the slides, I talk about the breakdown like so:

Pd: Desktop (Windows, Mac, Linux), Android, iOS

Processing: Desktop (Windows, Mac, Linux), Android, Browser

Now, there are some holes there, but even those are getting filled. Chris McCormick is working on a Web implementation of Pd, called WebPd. The usefulness of that is dependent on someone other than Mozilla implementing the Audio API, but it’s promising.

On the Processing side, I absolutely love this project, and need to look further into it: in the video below, you’ll see a mini “IDE” that allows you to tap in Processing code and render it, via Processing.js, in the iPad’s browser. Totally brilliant.

Of course, the same thing would be possible with tablets running Palm, BlackBerry, and Android OSes. And that’s what freedom in software is about – not an abstract ideal, but an everyday, today-you’re-going-to-get-work-done and platforms-won’t-be-a-pain-in-the-ass benefit. It is, after all, why we bought into this computer thing in the first place.

 
Shared by reBlog @ Eyebeam

Audiovisual technology has returned to spectacle. Artists are armed with new technologies for fusing space and image, sound and sight. What they tend not to have is permanent spaces. And that lack of venues has made audiovisual artists nomadic and provisional, constrained to hastily provisioned, rectangular, sometimes dim projections. In short, for revolutions to happen, you do need special venues, not just special artists.

Next month, Montreal will get a space that promises to be just that. The Society for Arts and Technology / La Société des arts technologiques (SAT) have built a space that’s also a kind of “instrument,” a domed enclosure on their top floor called the Satosphere. When they say they want it to be cross-disciplinary, they aren’t kidding: aside from obvious applications for visuals, sound, art, architecture, dance, and gaming, the space will include a “FoodLab” for collaboration with chefs.

The big draw of such a space is being able to provide an immersive experience, surrounding the visitor with imagery. (Iannis Xenakis, among others, would be proud.) The pictures here, courtesy of SAT, tell a big part of the story, I think. It’s superior Canadian research and engineering at work, ready to accommodate visualization and environmental interaction across a wide variety of disciplines.

A space alone doesn’t guarantee great work, but it’s quite a step. To fill the space starting when it opens in spring 2011, SAT have enlisted a variety of artists, including work with a scale model called the Labodome (seen at bottom). (Actually, I think working in miniature can generally be a fantastic way to approach this.)

SAT’s Satosphere (hopefully) won’t stand entirely alone, but it seems to me it’ll be a critical space. See more notes from last month in an editorial by SAT Founding President and Artistic Director Monique Savoie.

Autumn Editorial [SAT]

Also worth noting: among SAT’s research projects is something they’re calling the SPIN Framework.

http://spinframework.org/

It’s an open-source set of frameworks for manipulating spatial interaction, all using OSC (OpenSoundControl) for communication with tools like Pd and Processing. It’s based on OpenSceneGraph, a native framework for scene graph manipulation. It looks rather elegant, solving a number of basic problems for distributing virtual environments, whether across projectors, collaborators, spaces, or media. There’s a related project, also using OSG, which deals with audio and Pd, called Audioscape. Now, I have to reject the giant slash marks through the lovely pedals and Moogerfooger in the illustration on that site – surely this need not be an either/or proposition – but otherwise, it looks really intriguing.
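
Since everything speaks OSC, you can get a feel for driving a SPIN-style scene with just a few lines. Here’s a minimal Python sketch using the python-osc library; the host, port, and address pattern are placeholders made up for illustration, not SPIN’s documented namespace (see spinframework.org for the real message schema):

    # Minimal OSC sketch: nudge a node in a SPIN-like scene server.
    # The host, port, and address pattern below are illustrative assumptions.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 54323)  # assumed host and port

    # Hypothetical message: set the translation of a node in the default scene.
    client.send_message("/SPIN/default/myNode/setTranslation", [1.0, 0.5, -2.0])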

I’d be curious if there might be a wider effort to bring together these kinds of ideas as they emerge. See also UC Santa Barbara’s MAT program and their Allosphere (which actually faces more significant projection challenges, being a 360-degree sphere), as well as the audiovisual framework LuaAV. (LuaAV is quite a lot more involved than SPIN, above, more akin to Pd or Processing, but since everything here supports OSC, they could all easily communicate.)

If you’ve got ideas or research of your own, I’d love to hear them.

Working with a scale model of the big dome – looking a bit here like a spacecraft.

 
Shared by reBlog @ Eyebeam

After many hours sorting through C macros in the Pd code, working with all sorts of people contributing bits of code and experience here and there, Peter Brinkmann was finally the closer on the Pd/Android port and in the process created a nice, embeddable libpd. Here’s an alphabetical credits list of everyone who helped along the way:

Chris McCormick, Dominik Hierner, Hans-Christoph Steiner, Martin Roth, Miller Puckette, Naim Falandino, Peter Brinkmann, Peter Kirn, Scott Fitzgerald, and others

And the announcement:

libpd has reached a 0.1 release, enabling developers to use Pd as a sound engine in their applications. Out of the gate, we have extensive code samples for Android 1.5 and later, plus the basic tools to work on iOS (recent armv7 recommended for now, with other devices soon). In the near future, embedding Pd patches inside tools like Processing/Java, OpenFrameworks/C++, and Python should be just as easy. The library is based on Pd vanilla, so this is not a fork of Pd; you can use patches in it just as you would in any other version.

Developers will find the library, code snippets (for Android; iOS is coming), and even some handy abstractions:

http://gitorious.org/pdlib/

To learn more:

Article on the release at createdigitalmusic:

http://bit.ly/libpdishere

Group for discussing Pd on mobile, embedded, and using libpd:

http://noisepages.com/groups/pd-everywhere/

End users with Android phones or tablets can try out packages now:

http://gitorious.org/pdlib/pages/Packages

… in addition to patches from Chris detailed in the CDM post above.

libpd is available thanks to the work of Peter Brinkmann, with testing, further development, documentation, and other contributions from the RjDj team (who are now adopting it in their future development work), Hans-Christoph Steiner, Chris McCormick (who has also added the ability to make HTML5 web interfaces), and Peter Kirn, along with members of the NYC Patching Circle at NYC Resistor.

 
Tags: thinking
Shared by reBlog @ Eyebeam



The human brain contains many regions that are specialized for processing specific decisions and sensory inputs. Many of these are shared with our fellow mammals (and, in some cases, all vertebrates), suggesting that they are evolutionarily ancient specializations. But innovations like writing have only been around for a few thousand years, a time span that's too short relative to human generations to allow for this sort of large evolutionary change. In the absence of specialized capabilities, how has it become possible for such large portions of the population to become literate?

The authors of a paper that will be released by Science today suggest two possible alternatives to explain this widespread literacy. Either reading is similar enough to something that our brains could already do that it's processed by existing structures, or literacy has "stolen" areas of the brain that used to be involved in other functions. (A combination of the two is also possible.) In the new paper, they use functional MRI to image brain activity and figure out just what literacy does to the brain, and they discover that literacy does take over some new areas of the brain, with mixed effects on other areas of cognition.

Read the rest of this article...

 
Shared by reBlog @ Eyebeam

A few days ago I was in Bergen, Norway, where everything is so postcard pretty that even the fast food joints are afraid to stand out from their surroundings.

I was in Bergen to attend Piksel, the 8th festival for Electronic Art and Technological Freedom. If I could take part in events as exciting as this one more often, I bet I wouldn't sound so relentlessly blasée. I'll talk later about the exhibition, workshops and live events, but I'll kick off this series of reports with one of the most interesting presentations I heard at the festival.

In his talk, titled Golden Shield Music - Sonification of IT censorship technologies, Marco Donnarumma spent some time refreshing our memories about internet censorship before presenting a sound project which was directly inspired by it.

The name of his work, Golden Shield Music, refers unequivocally to China's Golden Shield Project, aka the 'Great Firewall of China'. This censorship and surveillance project, operated by the country's Ministry of Public Security, involves the massive use of web technologies such as IP blocking, DNS filtering and redirection, URL filtering, and packet filtering to censor specific content reached through web search engines such as Google, Yahoo and MSN. Donnarumma noted that it's easy to point the finger at China, but internet censorship is fairly widespread elsewhere in the world. Australia has gained fame for its attempts at censoring online content. The UK too. And even Italy. It was quite infuriating to live there while Berlusconi's government was blocking The Pirate Bay as a 'preventive measure'. The artist also told us about a secret deal that Facebook reportedly signed with the Italian police, giving them access to the personal data of any user suspected of identity theft, phishing scams or possession of child pornography.

Donnarumma also pointed us to the work of activist Matti Nikki, whose website lapsiporno.info monitors the Finnish censorship program.

Another website the artist highlighted - albeit with much less enthusiasm - is the very fancy-looking Recorded Future. Funded by the CIA and Google, the project monitors tens of thousands of websites, blogs and Twitter accounts in real time to find the relationships between people, organizations, actions and incidents. The goal of this intense data mining is to 'predict the future' by looking at the 'invisible links' between documents that talk about the same, or related, entities and events.

With Golden Shield Music, Marco Donnarumma wanted to use the censorship technology in a creative way. Because it was difficult to find precise information about the targets of censorship, the artist had to go through dozens of papers to finally find a list of websites blacklisted by China.

The generative composition for eight audio channels was created as follows (a rough code sketch of the mapping appears after the steps):

Golden Shield Music collects the IP addresses of the twelve websites most heavily screened by the Golden Shield.

The IP numbers are listed in a text file which feeds an automated polyphonic MIDI synthesizer. The latter translates each IP into a single note formed by 4 voices with a specific velocity.

The resulting notes are ordered by the number of pages the Golden Shield obscured for each IP address: the website IP with the highest page count on Google.com becomes the first note of the score, and the others follow in decreasing order.

The data organizes the musical notation, establishing an abstract relationship between internet information and musical algorithms that sounds harmonious and "handcrafted".
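
The talk didn't spell out the exact pitch and velocity formulas, so here is a rough Python rendering of one plausible reading of the steps above. The addresses, page counts, and formulas are all invented for illustration; only the four-voices-per-IP idea and the page-count ordering come from the description:

    # Sketch of the Golden Shield Music mapping: each IP becomes a 4-voice
    # note (one voice per octet), with a velocity derived from the address,
    # ordered by how many pages the Golden Shield obscures for it.
    # All data and formulas here are illustrative assumptions.

    blacklist = {
        "203.0.113.7": 1_200_000,    # invented address: blocked-page count
        "198.51.100.23": 850_000,
        "192.0.2.146": 430_000,
    }

    def ip_to_voices(ip):
        """One MIDI pitch (0-127) per octet of the address."""
        return [int(octet) % 128 for octet in ip.split(".")]

    def ip_to_velocity(ip):
        """Derive a velocity (32-127) from the full 32-bit address."""
        a, b, c, d = (int(o) for o in ip.split("."))
        return (a << 24 | b << 16 | c << 8 | d) % 96 + 32

    # The highest page count becomes the first note of the score.
    for ip in sorted(blacklist, key=blacklist.get, reverse=True):
        print(ip, "->", ip_to_voices(ip), "velocity:", ip_to_velocity(ip))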

Visit the project website to listen to a stereo version of the piece.

The objective was not to make good sound but to raise awareness of the phenomenon of web censorship.

 
Shared by reBlog @ Eyebeam

"The growing number of digital billboards on U.S. roads and highways consume large amounts of energy and are creating a wide variety of electronic waste, according to a new report (pdf). The new study says the typical digital billboard consumes about 30 times as much energy as the average American household."

"The digital billboards use more efficient LED (Light Emitting Diode) lighting than traditional signs, but deploy so many of the LED bulbs on each billboard that energy use is high; traditional billboards use just one or two large bulbs to illuminate signs. In addition, digital billboards are illuminated day and night, and require cooling systems that use more energy."

Source: Yale Environment 360.

Previously: Viva Las Vegas - LEDs and the energy efficiency paradox.

 
Shared by reBlog @ Eyebeam

The power consumption of our high-tech machines and devices is hugely underestimated.

When we talk about energy consumption, all attention goes to the electricity use of a device or a machine while in operation. A 30 watt laptop is considered more energy efficient than a 300 watt refrigerator. This may sound logical, but this kind of comparison does not make much sense if you don't also consider the energy that was required to manufacture the devices you compare. This is especially true for high-tech products, which are produced by means of extremely material- and energy-intensive manufacturing processes. How much energy do our high-tech gadgets really consume?

----------------------------------------------------------------------------------------------------------------------------------------------
Artwork: cityscape I & II by Grace Grothaus.
----------------------------------------------------------------------------------------------------------------------------------------------

The energy consumption of electronic devices is skyrocketing, as was recently reported by the International Energy Agency ("Gadgets and gigawatts"). According to the research paper, the electricity consumption of computers, cell phones, flat screen TVs, iPods and other gadgets will double by 2022 and triple by 2030. This comes down to the need for an additional 280 gigawatts of power generation capacity. An earlier report from the British Energy Saving Trust (The ampere strikes back - pdf) came to similar conclusions.

There are multiple reasons for the growing energy consumption of electronic equipment: more and more people can buy gadgets, more and more gadgets appear, and existing gadgets use more and more energy (in spite of more energy efficient technology - the energy efficiency paradox described here before).

The 180 watt laptop

While these reports are in themselves reason for concern, they hugely underestimate the energy use of electronic equipment. To start with, electricity consumption does not equal energy consumption. In the US, utility stations have an average efficiency of about 35 percent. If a laptop is said to consume 60 watt-hours of electricity, it consumes almost three times as much energy (around 180 watt-hours, or 648 kilojoules).
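
As a sanity check, the conversion is simple enough to script; a quick sketch using the 35 percent figure above (the article rounds the factor up to 3):

    # Convert electricity measured at the wall into primary energy,
    # assuming an average utility efficiency of 35 percent.
    GRID_EFFICIENCY = 0.35

    def primary_energy_wh(electricity_wh):
        return electricity_wh / GRID_EFFICIENCY

    wh = primary_energy_wh(60)
    # ~171 Wh of fuel for 60 Wh at the plug; the article rounds up
    # to a factor of 3, giving 180 Wh, or 648 kJ.
    print(round(wh), "Wh =", round(wh * 3.6), "kJ")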

So let's start by multiplying all figures by 3 to get a more realistic image of the energy consumption of our electronic equipment. Another thing that is too easily forgotten is the energy use of the infrastructure that supports many technologies, most notably the mobile phone network and the internet (which consists of server farms, routers, switches, optical equipment and the like).

Embodied energy

Most important, however, is the energy required to manufacture all this electronic equipment (both network and, especially, consumer appliances). The energy used to produce electronic gadgets is considerably higher than the energy used during their operation. For most of the 20th century, this was different; manufacturing methods were not so energy-intensive.

An old-fashioned car uses many times more energy during its lifetime (burning gasoline) than during its manufacture. The same goes for a refrigerator or the typical incandescent light bulb: the energy required to manufacture the product pales into insignificance when compared to the energy used during its operation.

Advanced digital technology has turned this relationship upside down. A handful of microchips can have as much embodied energy as a car. And since digital technology has brought about a plethora of new products, and has also infiltrated almost all existing products, this change has vast consequences. Present-day cars and long-established analogue devices are now full of microprocessors. Semiconductors (which form the energy-intensive basis of microchips) have also found their applications in ecotech products like solar panels and LEDs.

Where are the figures?

While it is fairly easy to obtain figures regarding the energy consumption of electronic devices during the use phase (you can even measure it yourself using a power meter), it is surprisingly hard to obtain reliable and up-to-date figures on the energy consumed during the production phase, especially when it concerns fast-evolving technologies. A life cycle analysis of high-tech products is extremely complex and can take many years, due to the large number of parts, materials and processing techniques involved. In the meantime, products and processing technologies keep evolving, with the result that most life cycle analyses are simply outdated when they are published.

----------------------------------------------------------------------------------------------------------------------------------------------

The embodied energy of the memory chip alone already exceeds the energy consumption of a laptop during its life expectancy of 3 years

----------------------------------------------------------------------------------------------------------------------------------------------

For more recent and emerging technologies, life cycle analyses simply do not exist. Try looking for a research paper that calculates the embodied energy of a Light Emitting Diode (LED), a lithium-ion battery or any device full of electronics meant to save energy: you won't find it (and if you do, please let me know).

Embodied energy of a computer

The most up-to-date life cycle analysis of a computer dates from 2004 and concerns a machine from 1990. It concluded that while the ratio of fossil fuel use to product weight is 2 to 1 for most manufactured products (you need 2 kilograms of fuel for 1 kilogram of product), the ratio is 12 to 1 for a computer (you need 12 kilograms of fuel for 1 kilogram of computer). Considering an average life expectancy of 3 years, this means that the total energy use of a computer is dominated by production (83%, or 7,329 megajoules) as opposed to operation (17%). Similar figures were obtained for mobile phones.

While the 1990 computer was a desktop machine with a CRT monitor, many of today's computers are laptops with an LCD screen. At first sight, this seems to indicate that the embodied energy of today's machines is lower than that of the 1990 machine, because much less material (plastics, metals, glass) is needed. But it is not the plastic, the metal and the glass that make computers so energy-intensive to produce. It's the tiny microchips, and present-day computers have more of them, not less.

100 years of manufacturing

The energy needed to manufacture microchips is disproportionate to their size. MIT researcher Timothy Gutowski compared the material and energy intensity of conventional manufacturing techniques with those used in semiconductor and nanomaterial production (a technology that is being developed for use in all kinds of products, including electronics, solar panels, batteries and LEDs).

----------------------------------------------------------------------------------------------------------------------------------------------

Digital technology is a product of cheap energy

----------------------------------------------------------------------------------------------------------------------------------------------

As an example of more conventional manufacturing methods, Gutowski calculated the energy requirements of machining, injection molding and casting. All these techniques are still used intensively today, but they were developed almost 100 years ago. Injection molding is used for the manufacture of plastic components, casting is used for the manufacture of metal components, and machining is a material removal process that involves the cutting of metals (used both for creating and finishing products).

6 orders of magnitude

While there are significant differences between configurations, all these manufacturing methods require between 1 and 10 megajoules of electricity per kilogram of material. This corresponds to 278 to 2,780 watt-hours of electricity per kilogram of material. Manufacturing a one kilogram plastic or metal part thus requires as much electricity as operating a flat screen television for 1 to 10 hours (if we assume that the part only undergoes one manufacturing operation).

The energy requirements of semiconductor and nanomaterial manufacturing techniques are much higher than that: up to 6 orders of magnitude (that's 10 raised to the 6th power) above those of conventional manufacturing processes (see figure below; source, supporting information). This comes down to between 1,000 and 100,000 megajoules per kilogram of material, compared to 1 to 10 megajoules for conventional manufacturing techniques.

(Figure: energy use of manufacturing processes, after Gutowski)

Manufacturing one kilogram of electronics or nanomaterials thus requires between 280 kilowatt-hours and 28 megawatt-hours of electricity; enough to power a flat screen television continuously for 41 days to 11.4 years. These data do not include facility air handling and environmental conditioning, which for semiconductors can be substantial.
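
Those television comparisons work out if you assume a set drawing about 280 watts, which appears to be the figure behind the article's numbers; a quick sketch:

    # How long a flat screen TV could run on the electricity used to
    # manufacture 1 kg of material. The 280 W TV is an assumption
    # inferred from the article's own comparisons.
    TV_WATTS = 280

    def tv_hours(mj_of_electricity):
        watt_hours = mj_of_electricity * 1e6 / 3600   # MJ -> Wh
        return watt_hours / TV_WATTS

    print(tv_hours(1), "to", tv_hours(10), "hours")   # conventional: ~1-10 h
    print(tv_hours(1_000) / 24, "days")               # chips, low end: ~41 days
    print(tv_hours(100_000) / 24 / 365, "years")      # high end: ~11.4 years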

Embodied energy of a microchip

The energy consumption of semiconductor manufacturing techniques corresponds with a life cycle analysis of a "typical" 2 gram microchip performed in 2002. Again, this concerns a 32 MB RAM memory chip - not really cutting edge technology today. But the results are nevertheless significant: to produce the 2 gram microchip, 1.6 kilograms of fuel were needed. That means you need 800 kilograms of fuel to produce one kilogram of microchips, compared to 12 kilograms of fuel to produce one kilogram of computer.

If we take the energy density of crude oil (45 MJ/kg), this comes down to 72 megajoules (or 20,000 watt-hours) to produce a 2 gram microchip. Converted to one kilogram of microchips, this comes down to 3.3 megawatt-hours of electricity (or 36,000 MJ of energy), well within the range of the 280 kilowatt-hours (1,000 MJ) and 28 megawatt-hours (100,000 MJ) calculated above.

Also, the International Technology Roadmap for Semiconductors 2007 edition gives a figure of 1.9 kilowatt-hours per square centimetre of microchip, so 20 kilowatt-hours per 2 gram, square-centimetre computer chip seems to be a reasonable estimate.

How many microchips in a computer?

A gadget or a computer does not contain one kilogram of semiconductors - far from it. But we don't need a kilogram of microchips to ensure that the manufacturing phase will largely outweigh the usage phase. The embodied energy of the memory chip alone already exceeds the energy consumption of a laptop during its life expectancy of 3 years.

Today's personal computers have 0.5 to 2 gigabytes of RAM, in modules that typically consist of 18 to 36 of the two-gram microchips described above. This equates to 1,296 to 2,592 megajoules of embodied energy for the computer memory alone, or 360,000 to 720,000 watt-hours. Enough to power a 30 watt laptop non-stop for 500 to 1,000 days.
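
A short back-of-the-envelope script makes the comparison explicit, using the 72 MJ per chip from the 2002 study and the 30 watt laptop from the start of the article:

    # Embodied energy of a laptop's RAM chips vs. the laptop's own draw.
    MJ_PER_CHIP = 72       # 2 g, 32 MB chip (2002 life cycle analysis)
    LAPTOP_WATTS = 30

    def embodied_wh(chips):
        return chips * MJ_PER_CHIP * 1e6 / 3600   # MJ -> Wh

    for chips in (18, 36):
        wh = embodied_wh(chips)
        days = wh / LAPTOP_WATTS / 24
        print(f"{chips} chips: {wh:,.0f} Wh = {days:,.0f} days of non-stop use")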

Microprocessors (the "brains" of all digital devices) are more advanced than memory chips and thus contain at least as much embodied energy. Unfortunately, no life cycle analysis of a microprocessor has been published. What is certain is that modern computers contain ever more of them.

One trend in recent years is the introduction of "multicore processors" and "multi-CPU systems". Personal computers can now contain 2, 3 or 4 microprocessors. Servers, game consoles and embedded systems can have many more. Each of these "cores" is capable of handling its own task independently of the others. This makes it possible to run several CPU-intensive processes (like running a virus scan, searching folders or burning a DVD) all at the same time, without a hitch. But with every extra chip (or chip surface) comes more embodied energy.

----------------------------------------------------------------------------------------------------------------------------------------------

The energy savings realised by digital technology will merely absorb its own growing footprint.

----------------------------------------------------------------------------------------------------------------------------------------------

Another trend is the rise of the "Graphics Processing Unit" or GPU. This is a specialised processor that offloads 3D graphics rendering from the microprocessor. The GPU is indispensable for playing modern videogames, but it is also needed because of the ever higher graphical requirements of operating systems. GPUs not only raise the energy consumption of a computer while in use (a GPU can consume more energy than a current CPU), but they also stand for more embodied energy. A GPU is very memory-intensive and thus also increases the need for more RAM chips.

Nanomaterials

Why are microchips so energy-intensive to manufacture? One of the reasons becomes clear when you literally zoom in on the technology. A microchip is small, but the amount of detail is fabulous. A microprocessor the size of a fingernail can now contain up to two billion transistors - each transistor less than 0.00007 millimetres wide. Magnify this circuit and it becomes a structure as complex as a sprawling metropolitan city.

The amount of materials embedded in the product might be small, but it takes a lot of processing (and thus machine energy use) to lay down a complex and detailed circuit like that. While the electricity requirements of machines used for semiconductor manufacturing are similar to those used for older processes like injection molding, the difference lies in the process rate: an injection molding machine can process up to 100 kilograms of material per hour, while semiconductor manufacturing machines only process materials in the order of grams or milligrams.

Another reason why digital technology is so energy-intensive to manufacture is the need for extremely effective air filters and air circulation systems (which are not included in the figures above). When you build infinitesimal structures like these, a single speck of dust can destroy the circuit. For the same reason, the manufacture of microchips requires the purest silicon (Electronic Grade Silicon or EGS, provided by the energy-intensive CVD process).

----------------------------------------------------------------------------------------------------------------------------------------------

The manufacture of nanotubes is as energy-intensive as the manufacture of microchips.

----------------------------------------------------------------------------------------------------------------------------------------------

Every 18 months the number of transistors on a microchip doubles (Moore's law). On the one hand, this means that less silicon is needed for a given amount of processing power or memory. On the other hand, when transistors become smaller, you need even more effective air filtration and purer silicon. Since the structure also becomes more complex, you need more processing steps.

Nanotechnology operates on an even smaller scale than micro-electronics, but its energy requirements are comparable. Carbon nanofiber production, which is based on many of the same techniques used in semiconductor manufacturing, requires 760 to 3,000 MJ of electricity per kilogram of material, while manufacturing carbon nanotubes and single-walled nanotubes (SWNTs) requires a hefty 20,000 to 50,000 MJ per kilogram. The manufacture of nanotubes is thus as energy-intensive as the manufacture of microchips (36,000 MJ). Many of the large-scale applications proposed for nanotubes will simply not be possible because of these energy requirements.

Recycling is no solution

Encouraging recycling is often proposed as a way to lower the embodied energy of products. Unfortunately, this does not work for micro-electronics (or nanomaterials). In the case of conventional manufacturing methods, the energy requirements of the manufacturing process (1 to 10 MJ per kilogram) are small compared to the energy required to produce the materials themselves.

For instance, producing 1 kilogram of plastic out of crude oil requires 62 to 108 MJ of energy, while a typical mix of virgin and recycled aluminum requires 219 MJ. To make a fair comparison, you have to multiply the energy requirement of the manufacturing process by three (1 megajoule of electricity requires 3 megajoules of energy), but even then (at 3 to 30 MJ/kg) conventional manufacturing processes appear to be quite benign compared to materials extraction and primary processing (in the order of 100 MJ/kg - see table).

----------------------------------------------------------------------------------------------------------------------------------------------

Recycling is not a solution if all your energy use is concentrated in the manufacturing process itself.

----------------------------------------------------------------------------------------------------------------------------------------------

In the case of semiconductor manufacturing, this relation is reversed. While it takes 230 to 235 MJ of energy to produce 1 kilogram of silicon (already quite high compared to many other materials), chemical vapour deposition (an important step in the semiconductor manufacturing process) requires about 1,000 MJ of electricity, and thus 3,000 MJ of energy, per kilogram.

That is 10 times more than the energy consumption of material extraction and primary processing. In the case of conventional manufacturing techniques, the use of recycled material is an effective way to lower overall energy use during manufacture. In the case of semiconductors, it is not. Recycling is not a solution for energy consumption if all your energy use is concentrated in the process itself.

This does not mean that the manufacture of microchips does not require materials. In fact, producing microchips and nanomaterials is also more material-intensive than the manufacture of conventional products, by the same orders of magnitude. However, this concerns auxiliary materials which are not incorporated into the product.

For example, the embodied energy of the input cleaning gases in the CVD process (not included in the figures above) is more than 4 orders of magnitude greater than that of the product output. Furthermore, these gases have to be treated to reduce their reactivity and possible attendant pollution. Gutowski writes: "If this is done using point of use combustion with methane, the embodied energy of the methane alone can exceed the electricity input."

The benefits of digital technology

Microchips also have positive effects on the environment, by making other activities and processes more efficient. This is the subject of a publication by the Climate Group, an initiative of more than 50 of the world's largest companies. The report ("Smart 2020 - enabling the low carbon economy in the information age") confirms the findings of other studies regarding the electricity use of electronic equipment, but also calculates the benefits.

According to Smart 2020, the emissions from Information and Communications Technology (including the energy use of data centres, which the IEA report does not include) will rise from 0.5 Gt CO2-equivalents in 2002 to 1.4 Gt CO2-equivalents in 2020, assuming that the sector will continue to make the "impressive advances in energy efficiency that it has done previously". By enabling energy efficiencies in other sectors, however, ICT could deliver carbon savings 5 times larger: 7.8 Gt CO2-equivalents in 2020.

----------------------------------------------------------------------------------------------------------------------------------------------

Addressing technological obsolescence would be the most powerful approach to lower the ecological footprint of digital technology

----------------------------------------------------------------------------------------------------------------------------------------------

These benefits are smart grids (2.03 Gt), smart buildings (1.86 Gt), smart motor systems (970 Mt), dematerialisation and substitution (replacing high-carbon physical products and activities, such as books and meetings, with virtual low-carbon equivalents, such as electronic commerce, electronic government and videoconferencing; 500 Mt), and smart logistics (225 Mt). One of the first tasks of ICT will be to monitor energy consumption and emissions across the economy in real time, providing the data needed to optimise for energy efficiency.

The report concludes: "The scale of emission reductions that could be enabled by the smart integration of ICT into new ways of operating, living, working, learning and travelling makes the sector a key player in the fight against climate change, despite its own growing footprint."

Circuit board city scape

But even if we assume that all these savings will materialise (the report acknowledges that this will not be an easy task), this conclusion does not take into account the energy needed to manufacture all this equipment. If we assume the share of manufacturing to be 80 percent of the total energy consumption of ICT (following the only life cycle analysis of a computer we have), then the 1.4 Gt in 2020 should in reality be 7 Gt - almost as much as the 7.8 Gt that will be saved by ICT. No environmental benefit would appear, and the energy savings realised by digital technology would merely absorb its own growing footprint.

Digital technology is a product of cheap energy

The research of Timothy Gutowski shows that the historical trend is toward more and more energy-intensive processes. At the same time, energy resources are declining.

Gutowski writes:

"This
phenomenon has been enabled by stable and declining material and energy
prices over this period. The seemingly extravagant use of materials and
energy resources by
many newer manufacturing processes is alarming and needs to be
addressed alongside claims of improved sustainability from products
manufactured by these means."

Production techniques for semiconductors and nanomaterials can and will become more efficient, by lowering the energy requirements of the equipment or by raising the operating process rate. For instance, the "International Technology Roadmap for Semiconductors" (ITRS), an initiative of the largest chip manufacturers worldwide, aims to lower energy consumption (pdf) per square centimetre of microchip from 1.9 kWh today to 1.6 kWh in 2012, 1.35 kWh in 2015, 1.20 kWh in 2018 and 1.10 kWh in 2022.

But as these figures show, improving efficiency has its limits. The gains will become smaller over time, and improving efficiency alone will never bridge the gap with conventional manufacturing techniques. Power-hungry production methods are inherent to digital technology as we know it.

The ITRS-report warns that:

"Limitations on sources of energy could potentially limit the
industry's ability to expand existing facilities or build new ones".

Gutowski writes:

"It should be pointed out that there is also a need for
completely rethinking each of these processes and exploring
alternative, and probably non-vapour-phase processes".

Technological obsolescence

The ecological footprint of digital technology described above is far from complete. This article focuses exclusively on energy use and does not take into account the toxicity of manufacturing processes or the use of water resources, both of which are also several orders of magnitude higher in the case of semiconductors and nanomaterials. To give an idea: most water used in semiconductor manufacturing is ultrapure water (UPW), which requires large additional quantities of chemicals. For many of these issues, the industry recognizes that there are no solutions (see the same ITRS report, pdf). There are also the problems of waste & war.

Last but not least: the energy-intensive nature of digital technology is not due only to energy-intensive manufacturing processes. Equally important is the extremely short life cycle of most gadgets. If digital products lasted a lifetime (or at least a decade), embodied energy would not be such an issue. Most computers and other electronic devices are replaced after only a couple of years, while they are still perfectly workable devices. Addressing technological obsolescence would be the most powerful approach to lower the ecological footprint of digital technology.

© Kris De Decker (edited by Vincent Grosjean). Artwork by Grace Grothaus (the works are for sale). More information on manufacturing methods.


 
Shared by reBlog @ Eyebeam



Speaking at the Chaos Computer Club (CCC) Congress in Berlin on Tuesday, a pair of researchers demonstrated a start-to-finish means of eavesdropping on encrypted GSM cellphone calls and text messages, using only four sub-$15 telephones as network “sniffers,” a laptop computer, and a variety of open source software.

While such capabilities have long been available to law enforcement with the resources to buy a powerful network-sniffing device for more than $50,000 (remember The Wire?), the pieced-together hack takes advantage of security flaws and shortcuts in the GSM network operators’ technology and operations to put the power within the reach of almost any motivated tech-savvy programmer.

Read the rest of this article...

 
Shared by reBlog @ Eyebeam

Like Steve and a lot of other people in the tech policy world, I've been trying to understand the dispute between Level 3 and Comcast. The combination of technical complexity and commercial secrecy has made the controversy almost impenetrable for anyone outside of the companies themselves. And of course, those who are at the center of the action have a strong incentive to mislead the public in ways that make their own side look better.

So building on Steve's excellent post, I'd like to tell two very different stories about the Level 3/Comcast dispute. One puts Level 3 in a favorable light and the other slants things more in Comcast's favor.

Story 1: Level 3 Abuses Its Customer Relationships

As Steve explained, a content delivery network (CDN) is a network of caching servers that help content providers deliver content to end users. Traditionally, Netflix has used CDNs like Akamai and Limelight to deliver its content to customers. The dispute began shortly after Level 3 beat out these CDN providers for the Netflix contract.



The crucial thing to note here is that CDNs can save Comcast, and other broadband retailers, a boatload of money. In a CDN-free world, a content provider like Netflix would send thousands of identical copies of its content to Comcast customers, consuming Comcast's bandwidth and maybe even forcing Comcast to pay transit fees to its upstream providers.

Akamai reportedly installs its caching servers at various points inside the networks of retailers like Comcast. Only a single copy of the content is sent from the Netflix server to each Akamai cache; customers then access the content from the caches. Because these caches are inside Comcast's network, they never require Comcast to pay for transit to receive them. And because there are many caches distributed throughout Comcast's network (to improve performance), content delivered by them is less likely to consume bandwidth on expensive long-haul connections.
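
The savings are easy to see with a toy calculation (all numbers invented purely for illustration):

    # Toy model of why in-network caches save a retail ISP money:
    # bytes crossing the network edge with and without a local cache.
    viewers = 100_000        # Comcast customers streaming one title (invented)
    stream_gb = 2            # size of one delivered copy (invented)

    without_cache = viewers * stream_gb   # every copy crosses the edge
    with_cache = 1 * stream_gb            # one copy fills the cache;
                                          # the rest is served internally

    print(f"edge traffic without cache: {without_cache:,} GB")
    print(f"edge traffic with in-network cache: {with_cache} GB")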

Now Level 3 wants to enter the CDN marketplace, but it decides to pursue a different strategy. For Akamai, deploying its servers inside of Comcast's network saves both Comcast and Akamai money, because Akamai would otherwise have to pay a third party to carry its traffic to Comcast. But as a tier 1 provider, Level 3 doesn't have to pay anyone for connectivity, and indeed in many cases third parties pay them for connectivity. Hence, placing the Level 3 servers inside of the Level 3 network is not only easier for Level 3, but in some cases it might actually generate extra revenue, as Level 3's customers have to pay for the extra traffic.

This dynamic might explain the oft-remarked-upon fact that Comcast seems to be simultaneously a peer and a customer of Level 3. Comcast pays Level 3 to carry traffic to and from distant networks that Comcast's own network does not reach—doing so is cheaper than building its own worldwide backbone network. But Comcast is less enthusiastic about paying Level 3 for traffic that originates from Level 3's own network (known as "on-net" traffic).

And even if Comcast isn't paying for Level 3's CDN traffic, it's still not hard to understand Comcast's irritation. When two companies sign a peering agreement, the assumption is typically that each party is doing roughly half the "work" of hauling the bits from source to destination. But in this case, because the bits are being generated by Level 3's CDN servers, the bits are traveling almost entirely over Comcast's network.

Hauling traffic all the way from the peering point to Comcast's customers will consume more of Comcast's network resources than hauling traffic from Akamai's distributed CDN servers did. And to add insult to injury, Level 3 apparently only gave Comcast a few weeks' notice of the impending traffic spike. So faced with the prospect of having to build additional infrastructure to accommodate this new, less efficient method for delivering Netflix bits to Comcast customers, Comcast asked Level 3 to help cover the costs.

Of course, another way to look at this is to say that Comcast (and other retailers like AT&T and Time Warner) brought the situation on themselves by over-charging Akamai for connectivity. I've read conflicting reports about whether and how much Comcast has traditionally charged Akamai for access to its network (presumably these details are trade secrets), but some people have suggested that Comcast charges Akamai for bandwidth and cabinet space even when their servers are deep inside Comcast's own network. If that's true, it may be penny wise and pound foolish on Comcast's part, because if Akamai is not able to win big customers like Netflix, then Comcast will have to pay to haul that traffic halfway across the Internet itself.

In my next post I'll tell a different story that casts Comcast in a less flattering light.

 
Syndicate content