There has always been a sense in which America and Europe owned film. They invented it at the end of the nineteenth century in unfashionable places like New Jersey, Leeds and the suburbs of Lyons. At first, they saw their clumsy new camera-projectors merely as more profitable versions of Victorian lantern shows, mechanical curiosities which might have a use as a sideshow at a funfair. Then the best of the pioneers looked beyond the fairground properties of their invention. A few directors, now mostly forgotten, saw that the flickering new medium was more than just a diversion. This crass commercial invention gradually began to evolve as an art. D W Griffith in California glimpsed its grace, German directors used it as an analogue to the human mind and the modernising city, Soviets emphasised its agitational and intellectual properties, and the Italians reconfigured it on an operatic scale.

So heady were these first decades of cinema that America and Europe can be forgiven for assuming that they were the only game in town. In less than twenty years western cinema had grown out of all recognition; its unknowns became the most famous people in the world; it made millions. It never occurred to its financial backers that another continent might borrow their magic box and make it its own. But film industries were emerging in Shanghai, Bombay and Tokyo, some of which would outgrow those in the west.

Between 1930 and 1935, China produced more than 500 films, mostly conventionally made in studios in Shanghai, without soundtracks. China's best directors - Bu Wancang and Yuan Muzhi - introduced elements of realism to their stories. The Peach Girl (1931) and Street Angel (1937) are regularly voted among the best ever made in the country.

India followed a different course. In the west, the arrival of talkies gave birth to a new genre - the musical - but in India, every one of the 5000 films made between 1931 and the mid-1950s had musical interludes. The films were stylistically more wide-ranging than the western musical, encompassing realism and escapist dance within individual sequences, and they were often three hours long rather than Hollywood's 90 minutes. The cost of such productions resulted in a distinctive national style of cinema. They were often made in Bombay, the centre of what is now known as 'Bollywood'. Performed in Hindi (rather than any of the numerous regional languages), they addressed social and peasant themes in an optimistic and romantic way and found markets in the Middle East, Africa and the Soviet Union.

In Japan, the film industry did not rival India's in size but was unusual in other ways. Whereas in Hollywood the producer was the central figure, in Tokyo the director chose the stories and hired the producer and actors. The model was that of an artist and his studio of apprentices. Employed by a studio as an assistant, a future director worked with senior figures, learned his craft, gained authority, until promoted to director with the power to select screenplays and performers. In the 1930s and 40s, this freedom of the director led to the production of some of Asia's finest films. 
The films of Kenji Mizoguchi were among the greatest of these. Mizoguchi's films were usually set in the nineteenth century and analysed the way in which the lives of the female characters whom he chose as his focus were constrained by the society of the time. From Osaka Elegy (1936) to Ugetsu Monogatari (1953) and beyond, he evolved a sinuous way of moving his camera in and around a scene, advancing towards significant details but often retreating at moments of confrontation or strong feeling. No one had used the camera with such finesse before. 
Even more important for film history, however, is the work of the great Ozu. Where Hollywood cranked up drama, Ozu avoided it. His camera seldom moved. It nestled at seated height, framing people square on, listening quietly to their words. Ozu rejected the conventions of editing, cutting not on action, as is usually done in the west, but for visual balance. Even more strikingly, Ozu regularly cut away from his action to a shot of a tree or a kettle or clouds, not to establish a new location but as a moment of repose. Many historians now compare such 'pillow shots' to the Buddhist idea that mu - empty space or nothing - is itself an element of composition.

As the art form most swayed by money and market, cinema would appear to be too busy to bother with questions of philosophy. The Asian nations proved and are still proving that this is not the case. Just as deep ideas about individual freedom have led to the aspirational cinema of Hollywood, so it is the beliefs which underlie cultures such as those of China and Japan that explain the distinctiveness of Asian cinema at its best. Yes, these films are visually striking, but it is their different sense of what a person is, and what space and action are, which makes them new to western eyes.

 

THE SUN

Imagine a world where the sun never sets. Children can laugh and play in the streets all through the night. Fishermen enjoy 24 hours of daylight on the open sea. To get any sleep, people must block all the light from their windows.

Now imagine a world with only darkness. Even in the middle of the day, the sun does not shine. The only light comes from the moon and the stars in the black sky. Cars must drive with their lights on all the time. When people awake in the morning, it looks like the middle of the night.

This is the situation for people who live above the Arctic Circle. The sun clearly influences their lives. This includes people in northern Russia, Canada, Alaska and Greenland. For part of the year they cannot see the sun. And part of the year the sun never disappears.

But do you ever think about the sun? All life depends on the power of the sun. Year after year, the sun warms the earth, gives us light, builds life on our planet, and even keeps us healthy.

Whatever early people thought about the sun, they did not know much about it. But as people began to use science they learned more about the sun. In 1543, Nicholas Copernicus demonstrated that the earth travels around the sun. One hundred years later, scientists estimated the distance to the sun. And as recently as 1904, a man named Ernest Rutherford showed how the sun produced such large amounts of heat. These people discovered that the sun is a star like all the other stars in the sky. However, for our planet, it is a very special star.

The earth is 150 million kilometers from the sun. Here is one way to imagine this great distance. Imagine that you are standing on the sun. Your friends are on the earth. If they turned on a light, it would take eight minutes for you to see it! But this is the perfect distance for the earth to use the sun's heat.
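
As a rough check of the eight-minute figure (a simple sketch, assuming light travels at about 300,000 kilometres per second): time = distance ÷ speed = 150,000,000 km ÷ 300,000 km/s = 500 seconds, which is roughly 8.3 minutes.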

The temperature of the sun is around 6,000 degrees Celsius at its surface, and 15 million degrees at its centre! If the earth were any closer, we would burn. But if the earth were any further away, we would freeze. And yet, the sun is more than a big heater.

The sun also helps provide us with fresh air. The sun heats the oceans. Then the water heats the air. The changing air temperatures create most of the world's wind. Wind moves air to different places so plants can remove carbon dioxide from the air and create oxygen.

But the sun also affects plants directly. The sun makes plants grow through the process of photosynthesis. Plants can change light from the sun into energy. They use the energy to grow bigger and stronger. All life on earth depends on plants. Without the sun, we could not grow food for ourselves or for our animals.

Plants are not the only things that capture the power of the sun. Humans can turn sunlight into electricity with solar cells. A solar cell collects the power of the sun and stores it. Then, this power can be used to run anything that uses electricity: cars, computers, or homes.

Besides all these amazing things, the sun also helps us to do something very simple but necessary. Without the sun, we would not be able to see anything!

The sun also helps people to be healthy and strong. It acts as a natural cleaner for our skin. The sun can help kill harmful bacteria that live on our skin. And the sun helps our bodies produce vitamin D. People need vitamin D to have strong bones.

The sun can also improve our mental health. In places where the sun does not shine, people can suffer from seasonal affective disorder. This is a kind of depression. People with seasonal affective disorder do not have energy and feel sad. They are treated by sitting near a special light. But nothing is as good as being in real sunlight. Sunlight can help prevent depression and keep people happy. When the sun is shining, people have more hope about the future.

The sun does many other things as well. It helps us tell time. It controls where and when animals travel. The sun's gravity keeps the planet in orbit. It even lets us see at night. This is because the sun shines on the moon and the moon sends the light down to the earth. The sun makes the colors of a rainbow after it rains. And it paints the sky during a sunset.

There are many things we still do not know about the sun. But the more we learn about the sun, the more we can thank God for giving us this wonderful gift.

LAND OF THE RISING SUN

Japan has a significantly better record in terms of average mathematical attainment than England and Wales. Large sample international comparisons of pupils' attainments since the 1960s have established that not only did Japanese pupils at age 13 have better scores of average attainment, but there was also a larger proportion of 'low' attainers in England, where, incidentally, the variation in attainment scores was much greater. The percentage of Gross National Product spent on education is reasonably similar in the two countries, so how is this higher and more consistent attainment in maths achieved?

Lower secondary schools in Japan cover three school years, from the seventh grade (age 13) to the ninth grade (age 15). Virtually all pupils at this stage attend state schools: only 3 per cent are in the private sector. Schools are usually modern in design, set well back from the road and spacious inside. Classrooms are large and pupils sit at single desks in rows. Lessons last for a standardised 50 minutes and are always followed by a 10-minute break, which gives the pupils a chance to let off steam. Teachers begin with a formal address and mutual bowing, and then concentrate on whole-class teaching.

Classes are large - usually about 40 - and are unstreamed. Pupils stay in the same class for all lessons throughout the school and develop considerable class identity and loyalty. Pupils attend the school in their own neighbourhood, which in theory removes ranking by school. In practice in Tokyo, because of the relative concentration of schools, there is some competition to get into the 'better' school in a particular area.

Traditional ways of teaching form the basis of the lesson and the remarkably quiet classes take their own notes of the points made and the examples demonstrated. Everyone has their own copy of the textbook supplied by the central education authority, Monbusho, as part of the concept of free compulsory education up to the age of 15. These textbooks are, on the whole, small, presumably inexpensive to produce, but well set out and logically developed. (One teacher was particularly keen to introduce colour and pictures into maths textbooks: he felt this would make them more accessible to pupils brought up in a cartoon culture.) Besides approving textbooks, Monbusho also decides the highly centralised national curriculum and how it is to be delivered.

Lessons all follow the same pattern. At the beginning, the pupils put solutions to the homework on the board, then the teachers comment, correct or elaborate as necessary. Pupils mark their own homework: this is an important principle in Japanese schooling as it enables pupils to see where and why they made a mistake, so that these can be avoided in future. No one minds mistakes or ignorance as long as you are prepared to learn from them.

After the homework has been discussed, the teacher explains the topic of the lesson, slowly and with a lot of repetition and elaboration. Examples are demonstrated on the board; questions from the textbook are worked through first with the class, and then the class is set questions from the textbook to do individually. Only rarely are supplementary worksheets distributed in a maths class. The impression is that the logical nature of the textbooks and their comprehensive coverage of different types of examples, combined with the relative homogeneity of the class, renders work sheets unnecessary. At this point, the teacher would circulate and make sure that all the pupils were coping well.

It is remarkable that large, mixed-ability classes could be kept together for maths throughout all their compulsory schooling from 6 to 15. Teachers say that they give individual help at the end of a lesson or after school, setting extra work if necessary. In observed lessons, any strugglers would be assisted by the teacher or quietly seek help from their neighbour. Carefully fostered class identity makes pupils keen to help each other - anyway, it is in their interests since the class progresses together.

This scarcely seems adequate help to enable slow learners to keep up. However, the Japanese attitude towards education runs along the lines of 'if you work hard enough, you can do almost anything'. Parents are kept closely informed of their children's progress and will play a part in helping their children to keep up with class, sending them to 'Juku' (private evening tuition) if extra help is needed and encouraging them to work harder. It seems to work, at least for 95 per cent of the school population.

So what are the major contributing factors in the success of maths teaching? Clearly, attitudes are important. Education is valued greatly in Japanese culture; maths is recognised as an important compulsory subject throughout schooling; and the emphasis is on hard work coupled with a focus on accuracy.

Other relevant points relate to the supportive attitude of a class towards slower pupils, the lack of competition within a class, and the positive emphasis on learning for oneself and improving one's own standard. And the view of repetitively boring lessons and learning the facts by heart, which is sometimes quoted in relation to Japanese classes, may be unfair and unjustified. No poor maths lessons were observed. They were mainly good and one or two were inspirational.

 

Striking Back at Lightning With Lasers

Seldom is the weather more dramatic than when thunderstorms strike. Their electrical fury inflicts death or serious injury on around 500 people each year in the United States alone. As the clouds roll in, a leisurely round of golf can become a terrifying dice with death - out in the open, a lone golfer may be a lightning bolt’s most inviting target. And there is damage to property too. Lightning damage costs American power companies more than $100 million a year.

But researchers in the United States and Japan are planning to hit back. Already in laboratory trials they have tested strategies for neutralising the power of thunderstorms, and this winter they will brave real storms, equipped with an armoury of lasers that they will be pointing towards the heavens to discharge thunderclouds before lightning can strike.

The idea of forcing storm clouds to discharge their lightning on command is not new. In the early 1960s, researchers tried firing rockets trailing wires into thunderclouds to set up an easy discharge path for the huge electric charges that these clouds generate. The technique survives to this day at a test site in Florida run by the University of Florida, with support from the Electrical Power Research Institute (EPRI), based in California. EPRI, which is funded by power companies, is looking at ways to protect the United States’ power grid from lightning strikes. ‘We can cause the lightning to strike where we want it to using rockets,’ says Ralph Bernstein, manager of lightning projects at EPRI. The rocket site is providing precise measurements of lightning voltages and allowing engineers to check how electrical equipment bears up.

Bad behaviour

But while rockets are fine for research, they cannot provide the protection from lightning strikes that everyone is looking for. The rockets cost around $1,200 each, can only be fired at a limited frequency and their failure rate is about 40 per cent. And even when they do trigger lightning, things still do not always go according to plan. ‘Lightning is not perfectly well behaved,’ says Bernstein. ‘Occasionally, it will take a branch and go someplace it wasn’t supposed to go.’

And anyway, who would want to fire streams of rockets in a populated area? ‘What goes up must come down,’ points out Jean-Claude Diels of the University of New Mexico. Diels is leading a project, which is backed by EPRI, to try to use lasers to discharge lightning safely - and safety is a basic requirement since no one wants to put themselves or their expensive equipment at risk. With around $500,000 invested so far, a promising system is just emerging from the laboratory.

The idea began some 20 years ago, when high-powered lasers were revealing their ability to extract electrons out of atoms and create ions. If a laser could generate a line of ionisation in the air all the way up to a storm cloud, this conducting path could be used to guide lightning to Earth, before the electric field becomes strong enough to break down the air in an uncontrollable surge. To stop the laser itself being struck, it would not be pointed straight at the clouds. Instead it would be directed at a mirror, and from there into the sky. The mirror would be protected by placing lightning conductors close by. Ideally, the cloud-zapper (gun) would be cheap enough to be installed around all key power installations, and portable enough to be taken to international sporting events to beam up at brewing storm clouds.

A stumbling block

However, there is still a big stumbling block. The laser is no nifty portable: it’s a monster that takes up a whole room. Diels is trying to cut down the size and says that a laser around the size of a small table is in the offing. He plans to test this more manageable system on live thunderclouds next summer. Bernstein says that Diels’s system is attracting lots of interest from the power companies.

But they have not yet come up with the $5 million that EPRI says will be needed to develop a commercial system, by making the lasers yet smaller and cheaper. ‘I cannot say I have money yet, but I’m working on it,’ says Bernstein. He reckons that the forthcoming field tests will be the turning point - and he’s hoping for good news. Bernstein predicts ‘an avalanche of interest and support’ if all goes well. He expects to see cloud-zappers eventually costing $50,000 to $100,000 each.

Other scientists could also benefit. With a lightning ‘switch’ at their fingertips, materials scientists could find out what happens when mighty currents meet matter. Diels also hopes to see the birth of ‘interactive meteorology’ - not just forecasting the weather but controlling it. ‘If we could discharge clouds, we might affect the weather,’ he says.

And perhaps, says Diels, we’ll be able to confront some other meteorological menaces. ‘We think we could prevent hail by inducing lightning,’ he says. Thunder, the shock wave that comes from a lightning flash, is thought to be the trigger for the torrential rain that is typical of storms. A laser thunder factory could shake the moisture out of clouds, perhaps preventing the formation of the giant hailstones that threaten crops. With luck, as the storm clouds gather this winter, laser-toting researchers could, for the first time, strike back.

 

 

Environmental Management

The role of governments in environmental management is difficult but inescapable. Sometimes, the state tries to manage the resources it owns, and does so badly. Often, however, governments act in an even more harmful way. They actually subsidise the exploitation and consumption of natural resources. A whole range of policies, from farm-price support to protection for coal-mining, do environmental damage and (often) make no economic sense. Scrapping them offers a two-fold bonus: a cleaner environment and a more efficient economy. Growth and environmentalism can actually go hand in hand, if politicians have the courage to confront the vested interest that subsidies create.

No activity affects more of the earth's surface than farming. It shapes a third of the planet's land area, not counting Antarctica, and the proportion is rising. World food output per head has risen by 4 per cent between the 1970s and 1980s mainly as a result of increases in yields from land already in cultivation, but also because more land has been brought under the plough. Higher yields have been achieved by increased irrigation, better crop breeding, and a doubling in the use of pesticides and chemical fertilisers in the 1970s and 1980s.

All these activities may have damaging environmental impacts. For example, land clearing for agriculture is the largest single cause of deforestation; chemical fertilisers and pesticides may contaminate water supplies; more intensive farming and the abandonment of fallow periods tend to exacerbate soil erosion; and the spread of monoculture and use of high-yielding varieties of crops have been accompanied by the disappearance of old varieties of food plants which might have provided some insurance against pests or diseases in future. Soil erosion threatens the productivity of land in both rich and poor countries. The United States, where the most careful measurements have been done, discovered in 1982 that about one-fifth of its farmland was losing topsoil at a rate likely to diminish the soil's productivity. The country subsequently embarked upon a program to convert 11 per cent of its cropped land to meadow or forest. Topsoil in India and China is vanishing much faster than in America.

Government policies have frequently compounded the environmental damage that farming can cause. In the rich countries, subsidies for growing crops and price supports for farm output drive up the price of land. The annual value of these subsidies is immense: about $250 billion, or more than all World Bank lending in the 1980s. To increase the output of crops per acre, a farmer's easiest option is to use more of the most readily available inputs: fertilisers and pesticides. Fertiliser use doubled in Denmark in the period 1960-1985 and increased in The Netherlands by 150 per cent. The quantity of pesticides applied has risen too: by 69 per cent in 1975-1984 in Denmark, for example, with a rise of 115 per cent in the frequency of application in the three years from 1981.

In the late 1980s and early 1990s some efforts were made to reduce farm subsidies. The most dramatic example was that of New Zealand, which scrapped most farm support in 1984. A study of the environmental effects, conducted in 1993, found that the end of fertiliser subsidies had been followed by a fall in fertiliser use (a fall compounded by the decline in world commodity prices, which cut farm incomes). The removal of subsidies also stopped land-clearing and over-stocking, which in the past had been the principal causes of erosion. Farms began to diversify. The one kind of subsidy whose removal appeared to have been bad for the environment was the subsidy to manage soil erosion.

In less enlightened countries, and in the European Union, the trend has been to reduce rather than eliminate subsidies, and to introduce new payments to encourage farmers to treat their land in environmentally friendlier ways, or to leave it fallow. It may sound strange but such payments need to be higher than the existing incentives for farmers to grow food crops. Farmers, however, dislike being paid to do nothing. In several countries they have become interested in the possibility of using fuel produced from crop residues either as a replacement for petrol (as ethanol) or as fuel for power stations (as biomass). Such fuels produce far less carbon dioxide than coal or oil, and absorb carbon dioxide as they grow. They are therefore less likely to contribute to the greenhouse effect. But they are rarely competitive with fossil fuels unless subsidised - and growing them does no less environmental harm than other crops.

In poor countries, governments aggravate other sorts of damage. Subsidies for pesticides and artificial fertilisers encourage farmers to use greater quantities than are needed to get the highest economic crop yield. A study by the International Rice Research Institute of pesticide use by farmers in South East Asia found that, with pest-resistant varieties of rice, even moderate applications of pesticide frequently cost farmers more than they saved. Such waste puts farmers on a chemical treadmill: bugs and weeds become resistant to poisons, so next year's poisons must be more lethal. One cost is to human health. Every year some 10,000 people die from pesticide poisoning, almost all of them in the developing countries, and another 400,000 become seriously ill. As for artificial fertilisers, their use world-wide increased by 40 per cent per unit of farmed land between the mid 1970s and late 1980s, mostly in the developing countries. Overuse of fertilisers may cause farmers to stop rotating crops or leaving their land fallow. That, in turn, may make soil erosion worse.

A result of the Uruguay Round of world trade negotiations is likely to be a reduction of 36 per cent in the average levels of farm subsidies paid by the rich countries in 1986-1990. Some of the world's food production will move from Western Europe to regions where subsidies are lower or non-existent, such as the former communist countries and parts of the developing world. Some environmentalists worry about this outcome. It will undoubtedly mean more pressure to convert natural habitat into farmland. But it will also have many desirable environmental effects. The intensity of farming in the rich world should decline, and the use of chemical inputs will diminish. Crops are more likely to be grown in the environments to which they are naturally suited. And more farmers in poor countries will have the money and the incentive to manage their land in ways that are sustainable in the long run. That is important. To feed an increasingly hungry world, farmers need every incentive to use their soil and water effectively and efficiently.

 

 

The Truth about the Environment

For many environmentalists, the world seems to be getting worse. They have developed a hit-list of our main fears: that natural resources are running out; that the population is ever growing, leaving less and less to eat; that species are becoming extinct in vast numbers, and that the planet's air and water are becoming ever more polluted.

But a quick look at the facts shows a different picture. First, energy and other natural resources have become more abundant, not less so, since the book 'The Limits to Growth' was published in 1972 by a group of scientists. Second, more food is now produced per head of the world's population than at any time in history. Fewer people are starving. Third, although species are indeed becoming extinct, only about 0.7% of them are expected to disappear in the next 50 years, not 25-50%, as has so often been predicted. And finally, most forms of environmental pollution either appear to have been exaggerated, or are transient - associated with the early phases of industrialisation and therefore best cured not by restricting economic growth, but by accelerating it. One form of pollution - the release of greenhouse gases that causes global warming - does appear to be a phenomenon that is going to extend well into our future, but its total impact is unlikely to pose a devastating problem. A bigger problem may well turn out to be an inappropriate response to it.

Yet opinion polls suggest that many people nurture the belief that environmental standards are declining and four factors seem to cause this disjunction between perception and reality.

One is the lopsidedness built into scientific research. Scientific funding goes mainly to areas with many problems. That may be wise policy, but it will also create an impression that many more potential problems exist than is the case.

Secondly, environmental groups need to be noticed by the mass media. They also need to keep the money rolling in. Understandably, perhaps, they sometimes overstate their arguments. In 1997, for example, the World Wide Fund for Nature issued a press release entitled: 'Two thirds of the world's forests lost forever'. The truth turns out to be nearer 20%.

Though these groups are run overwhelmingly by selfless folk, they nevertheless share many of the characteristics of other lobby groups. That would matter less if people applied the same degree of scepticism to environmental lobbying as they do to lobby groups in other fields. A trade organisation arguing for, say, weaker pollution controls is instantly seen as self-interested. Yet a green organisation opposing such a weakening is seen as altruistic, even if an impartial view of the controls in question might suggest they are doing more harm than good.

A third source of confusion is the attitude of the media. People are clearly more curious about bad news than good. Newspapers and broadcasters are there to provide what the public wants. That, however, can lead to significant distortions of perception. An example was America's encounter with El Nino in 1997 and 1998. This climatic phenomenon was accused of wrecking tourism, causing allergies, melting the ski-slopes and causing 22 deaths. However, according to an article in the Bulletin of the American Meteorological Society, the damage it did was estimated at US$4 billion but the benefits amounted to some US$19 billion. These came from higher winter temperatures (which saved an estimated 850 lives, reduced heating costs and diminished spring floods caused by meltwaters).

The fourth factor is poor individual perception. People worry that the endless rise in the amount of stuff everyone throws away will cause the world to run out of places to dispose of waste. Yet, even if America's trash output continues to rise as it has done in the past, and even if the American population doubles by 2100, all the rubbish America produces through the entire 21st century will still take up only one-12,000th of the area of the entire United States.

So what of global warming? As we know, carbon dioxide emissions are causing the planet to warm. The best estimates are that the temperatures will rise by 2-3°C in this century, causing considerable problems, at a total cost of US$5,000 billion.

Despite the intuition that something drastic needs to be done about such a costly problem, economic analyses clearly show it will be far more expensive to cut carbon dioxide emissions radically than to pay the costs of adaptation to the increased temperatures. A model by one of the main authors of the United Nations Climate Change Panel shows how an expected temperature increase of 2.1 degrees in 2100 would only be diminished to an increase of 1.9 degrees. Or to put it another way, the temperature increase that the planet would have experienced in 2094 would be postponed to 2100.

So this does not prevent global warming, but merely buys the world six years. Yet the cost of reducing carbon dioxide emissions, for the United States alone, will be higher than the cost of solving the world's single, most pressing health problem: providing universal access to clean drinking water and sanitation. Such measures would avoid 2 million deaths every year, and prevent half a billion people from becoming seriously ill.

It is crucial that we look at the facts if we want to make the best possible decisions for the future. It may be costly to be overly optimistic - but more costly still to be too pessimistic. 

 

 

The effects of light on plant and animal species

Light is important to organisms for two different reasons. Firstly it is used as a cue for the timing of daily and seasonal rhythms in both plants and animals, and secondly it is used to assist growth in plants.

Breeding in most organisms occurs during a part of the year only, and so a reliable cue is needed to trigger breeding behaviour. Day length is an excellent cue, because it provides a perfectly predictable pattern of change within the year. In the temperate zone in spring, temperatures fluctuate greatly from day to day, but day length increases steadily by a predictable amount. The seasonal impact of day length on physiological responses is called photoperiodism, and the amount of experimental evidence for this phenomenon is considerable. For example, some species of birds’ breeding can be induced even in midwinter simply by increasing day length artificially (Wolfson 1964). Other examples of photoperiodism occur in plants. A short-day plant flowers when the day is less than a certain critical length. A long-day plant flowers after a certain critical day length is exceeded. In both cases the critical day length differs from species to species. Plants which flower after a period of vegetative growth, regardless of photoperiod, are known as day-neutral plants.

Breeding seasons in animals such as birds have evolved to occupy the part of the year in which offspring have the greatest chances of survival. Before the breeding season begins, food reserves must be built up to support the energy cost of reproduction, and to provide for young birds both when they are in the nest and after fledging. Thus many temperate-zone birds use the increasing day lengths in spring as a cue to begin the nesting cycle, because this is a point when adequate food resources will be assured.

The adaptive significance of photoperiodism in plants is also clear. Short-day plants that flower in spring in the temperate zone are adapted to maximising seedling growth during the growing season. Long-day plants are adapted for situations that require fertilization by insects, or a long period of seed ripening. Short-day plants that flower in the autumn in the temperate zone are able to build up food reserves over the growing season and over winter as seeds. Day-neutral plants have an evolutionary advantage when the connection between the favourable period for reproduction and day length is much less certain. For example, desert annuals germinate, flower and seed whenever suitable rainfall occurs, regardless of the day length. 

The breeding season of some plants can be delayed to extraordinary lengths. Bamboos are perennial grasses that remain in a vegetative state for many years and then suddenly flower, fruit and die (Evans 1976). Every bamboo of the species Chusquea abietifolia on the island of Jamaica flowered, set seed and died during 1884. The next generation of bamboo flowered and died between 1916 and 1918, which suggests a vegetative cycle of about 31 years. The climatic trigger for this flowering cycle is not yet known, but the adaptive significance is clear. The simultaneous production of masses of bamboo seeds (in some cases lying 12 to 15 centimetres deep on the ground) is more than all the seed-eating animals can cope with at the time, so that some seeds escape being eaten and grow up to form the next generation (Evans 1976).

The second reason light is important to organisms is that it is essential for photosynthesis. This is the process by which plants use energy from the sun to convert carbon from soil or water into organic material for growth. The rate of photosynthesis in a plant can be measured by calculating the rate of its uptake of carbon. There is a wide range of photosynthetic responses of plants to variations in light intensity. Some plants reach maximal photosynthesis at one-quarter full sunlight, and others, like sugarcane, never reach a maximum, but continue to increase photosynthesis rate as light intensity rises.

Plants in general can be divided into two groups: shade-tolerant species and shade-intolerant species. This classification is commonly used in forestry and horticulture. Shade-tolerant plants have lower photosynthetic rates and hence have lower growth rates than those of shade-intolerant species. Plant species become adapted to living in a certain kind of habitat, and in the process evolve a series of characteristics that prevent them from occupying other habitats. Grime (1966) suggests that light may be one of the major components directing these adaptations. For example, eastern hemlock seedlings are shade-tolerant. They can survive in the forest understorey under very low light levels because they have a low photosynthetic rate.  

 

 

THE WILD SIDE OF TOWN

The countryside is no longer the place to see wildlife, according to Chris Barnes. These days you are more likely to find impressive numbers of skylarks, dragonflies and toads in your own back garden.

The past half century has seen an interesting reversal in the fortunes of much of Britain's wildlife. Whilst the rural countryside has become poorer and poorer, wildlife habitat in towns has burgeoned. Now, if you want to hear a deafening dawn chorus of birds or familiarise yourself with foxes, you can head for the urban forest.

Whilst species that depend on wide open spaces such as the hare, the eagle and the red deer may still be restricted to remote rural landscapes, many of our wild plants and animals find the urban ecosystem ideal. This really should be no surprise, since it is the fragmentation and agrochemical pollution in the farming lowlands that has led to the catastrophic decline of so many species.

By contrast, most urban open spaces have escaped the worst of the pesticide revolution, and they are an intimate mosaic of interconnected habitats. Over the years, the cutting down of hedgerows on farmland has contributed to habitat isolation and species loss. In towns, the tangle of canals, railway embankments, road verges and boundary hedges lace the landscape together, providing first-class ecological corridors for species such as hedgehogs, kingfishers and dragonflies.

Urban parks and formal recreation grounds are valuable for some species, and many of them are increasingly managed with wildlife in mind. But in many places their significance is eclipsed by the huge legacy of post-industrial land - demolished factories, waste tips, quarries, redundant railway yards and other so-called ‘brownfield’ sites. In Merseyside, South Yorkshire and the West Midlands, much of this has been spectacularly colonised with birch and willow woodland, herb-rich grassland and shallow wetlands. As a consequence, there are song birds and predators in abundance over these once-industrial landscapes.

There are fifteen million domestic gardens in the UK, and whilst some are still managed as lifeless chemical war zones, most benefit the local wildlife, either through benign neglect or positive encouragement. Those that do best tend to be woodland species, and the garden lawns and flower borders, climber-covered fences, shrubberies and fruit trees are a plausible alternative. Indeed, in some respects gardens are rather better than the real thing, especially with exotic flowers extending the nectar season. Birdfeeders can also supplement the natural seed supply, and only the millions of domestic cats may spoil the scene.

As Britain’s gardeners have embraced the idea of ‘gardening with nature’, wildlife’s response has been spectacular. Between 1990 and the year 2000, the number of different bird species seen at artificial feeders in gardens increased from 17 to an amazing 81. The BUGS project (Biodiversity in Urban Gardens in Sheffield) calculates that there are 25,000 garden ponds and 100,000 nest boxes in that one city alone.

We are at last acknowledging that the wildlife habitat in towns provides a valuable life support system. The canopy of the urban forest is filtering air pollution, and intercepting rainstorms, allowing the water to drip more gradually to the ground. Sustainable urban drainage relies on ponds and wetlands to contain storm water runoff, thus reducing the risk of flooding, whilst reed beds and other wetland wildlife communities also help to clean up the water. We now have scientific proof that contact with wildlife close to home can help to reduce stress and anger. Hospital patients with a view of natural green space make a more rapid recovery and suffer less pain.

Traditionally, nature conservation in the UK has been seen as marginal and largely rural. Now we are beginning to place it at the heart of urban environmental and economic policy. There are now dozens of schemes to create new habitats and restore old ones in and around our big cities. Biodiversity is big in parts of London, thanks to schemes such as the London Wetland Centre in the south west of the city.

This is a unique scheme masterminded by the Wildfowl and Wetlands Trust to create a wildlife reserve out of a redundant Victorian reservoir. Within five years of its creation the Centre has been hailed as one of the top sites for nature in England and made a Site of Special Scientific Interest. It consists of a 105-acre wetland site, which is made up of different wetland habitats of shallow, open water and grazing marsh. The site attracts more than 104 species of bird, including nationally important rarities like the bittern.

We need to remember that if we work with wildlife, then wildlife will work for us and this is the very essence of sustainable development.

 

Green Wave Washes Over Mainstream Shopping

Research in Britain has shown that 'green consumers' continue to flourish as a significant group amongst shoppers. This suggests that politicians who claim environmentalism is yesterday's issue may be seriously misjudging the public mood.

A report from Mintel, the market research organisation, says that despite recession and financial pressures, more people than ever want to buy environmentally friendly products and a 'green wave' has swept through consumerism, taking in people previously untouched by environmental concerns. The recently published report also predicts that the process will repeat itself with 'ethical' concerns, involving issues such as fair trade with the Third World and the social record of businesses. Companies will have to be more honest and open in response to this mood.

Mintel's survey, based on nearly 1,000 consumers, found that the proportion who look for green products and are prepared to pay more for them has climbed from 53 per cent in 1990 to around 60 per cent in 1994. On average, they will pay 13 per cent more for such products, although this percentage is higher among women, managerial and professional groups and those aged 35 to 44.

Between 1990 and 1994 the proportion of consumers claiming to be unaware of or unconcerned about green issues fell from 18 to 10 per cent but the number of green spenders among older people and manual workers has risen substantially. Regions such as Scotland have also caught up with the south of England in their environmental concerns. According to Mintel, the image of green consumerism as associated in the past with the more eccentric members of society has virtually disappeared. The consumer research manager for Mintel, Angela Hughes, said it had become firmly established as a mainstream market. She explained that, as far as the average person is concerned, environmentalism has not 'gone off the boil'. In fact, it has spread across a much wider range of consumer groups, ages and occupations.

Mintel's 1994 survey found that 13 per cent of consumers are 'very dark green', nearly always buying environmentally friendly products, 28 per cent are 'dark green', trying 'as far as possible' to buy such products, and 21 per cent are 'pale green' - tending to buy green products if they see them. Another 26 per cent are 'armchair greens'; they said they care about environmental issues but their concern does not affect their spending habits. Only 10 per cent say they do not care about green issues.

Four in ten people are 'ethical spenders', buying goods which do not, for example, involve dealings with oppressive regimes. This figure is the same as in 1990, although the number of 'armchair ethicals' has risen from 28 to 35 per cent and only 22 per cent say they are unconcerned now, against 30 per cent in 1990. Hughes claims that in the twenty-first century, consumers will be encouraged to think more about the entire history of the products and services they buy, including the policies of the companies that provide them and that this will require a greater degree of honesty with consumers.

Among green consumers, animal testing is the top issue - 48 per cent said they would be deterred from buying a product if it had been tested on animals - followed by concerns regarding irresponsible selling, the ozone layer, river and sea pollution, forest destruction, recycling and factory farming. However, concern for specific issues is lower than in 1990, suggesting that many consumers feel that Government and business have taken on the environmental agenda.

The history of the poster

The appearance of the poster has changed continuously over the past two centuries.

The first posters were known as ‘broadsides’ and were used for public and commercial announcements. Printed on one side only using metal type, they were quickly and crudely produced in large quantities. As they were meant to be read at a distance, they required large lettering.

There were a number of negative aspects of large metal type. It was expensive, required a large amount of storage space and was extremely heavy. If a printer did have a collection of large metal type, it was likely that there were not enough letters. So printers did their best by mixing and matching styles.

Commercial pressure for large type was answered with the invention of a system for wood type production. In 1827, Darius Wells invented a special wood drill - the lateral router - capable of cutting letters on wood blocks. The router was used in combination with William Leavenworth’s pantograph (1834) to create decorative wooden letters of all shapes and sizes. The first posters began to appear, but they had little colour and design; often wooden type was mixed with metal type in a conglomeration of styles.

A major development in poster design was the application of lithography, invented by Alois Senefelder in 1796, which allowed artists to hand-draw letters, opening the field of type design to endless styles. The method involved drawing with a greasy crayon onto finely surfaced Bavarian limestone and offsetting that image onto paper. This direct process captured the artist's true intention; however, the final printed image was in reverse. The images and lettering needed to be drawn backwards, often reflected in a mirror or traced on transfer paper.

As a result of this technical difficulty, the invention of the lithographic process had little impact on posters until the 1860s, when Jules Cheret came up with his ‘three-stone lithographic process’. This gave artists the opportunity to experiment with a wide spectrum of colours.

Although the process was difficult, the result was remarkable, with nuances of colour impossible in other media even to this day. The ability to mix words and images in such an attractive and economical format finally made the lithographic poster a powerful innovation.

Starting in the 1870s, posters became the main vehicle for advertising prior to the magazine era and the dominant means of mass communication in the rapidly growing cities of Europe and America. Yet in the streets of Paris, Milan and Berlin, these artistic prints were so popular that they were stolen off walls almost as soon as they were hung. Cheret, later known as ‘the father of the modern poster’, organised the first exhibition of posters in 1884 and two years later published the first book on poster art. He quickly took advantage of the public interest by arranging for artists to create posters, at a reduced size, that were suitable for in-home display.

Thanks to Cheret, the poster slowly took hold in other countries in the 1890s and came to celebrate each society’s unique cultural institutions: the cafe in France, the opera and fashion in Italy, festivals in Spain, literature in Holland and trade fairs in Germany. The first poster shows were held in Great Britain and Italy in 1894, Germany in 1896 and Russia in 1897. The most important poster show ever, to many observers, was held in Reims, France, in 1896 and featured an unbelievable 1,690 posters arranged by country.

In the early 20th century, the poster continued to play a large communication role and to go through a range of styles. By the 1950s, however, it had begun to share the spotlight with other media, mainly radio and print. By this time, most posters were printed using the mass production technique of photo offset, which resulted in the familiar dot pattern seen in newspapers and magazines. In addition, the use of photography in posters, begun in Russia in the twenties, started to become as common as illustration.

In the late fifties, a new graphic style that had strong reliance on typographic elements in black and white appeared. The new style came to be known as the International Typographic Style. It made use of a mathematical grid, strict graphic rules and black-and-white photography to provide a clear and logical structure. It became the predominant style in the world in the 1970s and continues to exert its influence today.

It was perfectly suited to the increasingly international post-war marketplace, where there was a strong demand for clarity. This meant that the accessibility of words and symbols had to be taken into account. Corporations wanted international identification, and events such as the Olympics called for universal solutions, which the Typographic Style could provide.

However, the International Typographic Style began to lose its energy in the late 1970s. Many criticised it for being cold, formal and dogmatic.

A young teacher in Basel, Wolfgang Weingart, experimented with the offset printing process to produce posters that appeared complex and chaotic, playful and spontaneous - all in stark contrast to what had gone before. Weingart's liberation of typography was an important foundation for several new styles. These ranged from Memphis and Retro to the advances now being made in computer graphics.

Adapted from www.internationalposter.com

What Is a Port City?

The port city provides a fascinating and rich understanding of the movement of people and goods around the world. We understand a port as a centre of land-sea exchange, and as a major source of livelihood and a major force for cultural mixing. But do ports all produce a range of common urban characteristics which justify classifying port cities together under a single generic label? Do they have enough in common to warrant distinguishing them from other kinds of cities?

A port must be distinguished from a harbour. They are two very different things. Most ports have poor harbours, and many fine harbours see few ships. Harbour is a physical concept, a shelter for ships; port is an economic concept, a centre of land-sea exchange which requires good access to a hinterland even more than a sea-linked foreland. It is landward access, which is productive of goods for export and which demands imports, that is critical. Poor harbours can be improved with breakwaters and dredging if there is a demand for a port. Madras and Colombo are examples of harbours expensively improved by enlarging, dredging and building breakwaters.

Port cities become industrial, financial and service centres and political capitals because of their water connections and the urban concentration which arises there and later draws to it railways, highways and air routes. Water transport means cheap access, the chief basis of all port cities. Many of the world's biggest cities, for example, London, New York, Shanghai, Istanbul, Buenos Aires, Tokyo, Jakarta, Calcutta, Philadelphia and San Francisco began as ports - that is, with land-sea exchange as their major function - but they have since grown disproportionately in other respects so that their port functions are no longer dominant. They remain different kinds of places from non-port cities and their port functions account for that difference.

Port functions, more than anything else, make a city cosmopolitan. A port city is open to the world. In it races, cultures, and ideas, as well as goods from a variety of places, jostle, mix and enrich each other and the life of the city. The smell of the sea and the harbour, the sound of boat whistles or the moving tides are symbols of their multiple links with a wide world, samples of which are present in microcosm within their own urban areas.

Sea ports have been transformed by the advent of powered vessels, whose size and draught have increased. Many formerly important ports have become economically and physically less accessible as a result. By-passed by most of their former enriching flow of exchange, they have become cultural and economic backwaters or have acquired the character of museums of the past. Examples of these are Charleston, Salem, Bristol, Plymouth, Surat, Galle, Melaka, Soochow, and a long list of earlier prominent port cities in Southeast Asia, Africa and Latin America. 

Much domestic port trade has not been recorded. What evidence we have suggests that domestic trade was greater at all periods than external trade. Shanghai, for example, did most of its trade with other Chinese ports and inland cities. Calcutta traded mainly with other parts of India and so on. Most of any city's population is engaged in providing goods and services for the city itself. Trade outside the city is its basic function. But each basic worker requires food, housing, clothing and other such services. Estimates of the ratio of basic to service workers range from 1:4 to 1:8.

No city can be simply a port but must be involved in a variety of other activities. The port function of the city draws to it raw materials and distributes them in many other forms. Ports take advantage of the need for breaking up the bulk material where water and land transport meet and where loading and unloading costs can be minimised by refining raw materials or turning them into finished goods. The major examples here are oil refining and ore refining, which are commonly located at ports. It is not easy to draw a line around what is and is not a port function. All ports handle, unload, sort, alter, process, repack, and reship most of what they receive. A city may still be regarded as a port city when it becomes involved in a great range of functions not immediately involved with ships or docks.

Cities which began as ports retain the chief commercial and administrative centre of the city close to the waterfront. The centre of New York is in lower Manhattan between two river mouths, the City of London is on the Thames, Shanghai along the Bund. This proximity to water is also true of Boston, Philadelphia, Bombay, Calcutta, Madras, Singapore, Bangkok, Hong Kong and Yokohama, where the commercial, financial, and administrative centres are still grouped around their harbours even though each city has expanded into a metropolis. Even a casual visitor cannot mistake them as anything but port cities.

 

 

The construction of roads and bridges

Roads

Although there were highway links in Mesopotamia from as early as 3500 BC, the Romans were probably the first road-builders with fixed engineering standards. At the peak of the Roman Empire in the first century AD, Rome had road connections totalling about 85,000 kilometres.

Roman roads were constructed with a deep stone surface for stability and load-bearing. They had straight alignments and therefore were often hilly. The Roman roads remained the main arteries of European transport for many centuries, and even today many roads follow the Roman routes. New roads were generally of inferior quality, and the achievements of Roman builders were largely unsurpassed until the resurgence of road-building in the eighteenth century.

With horse-drawn coaches in mind, eighteenth-century engineers preferred to curve their roads to avoid hills. The road surface was regarded as merely a face to absorb wear, the load-bearing strength being obtained from a properly prepared and well-drained foundation. Immediately above this, the Scottish engineer John McAdam (1756-1836) typically laid crushed stone, to which stone dust mixed with water was added, and which was compacted to a thickness of just five centimetres, and then rolled. McAdam’s surface layer - hot tar onto which a layer of stone chips was laid - became known as ‘tarmacadam’, or tarmac. Roads of this kind were known as flexible pavements.

By the early nineteenth century - the start of the railway age - men such as John McAdam and Thomas Telford had created a British road network totalling some 200,000 km, of which about one sixth was privately owned toll roads called turnpikes. In the first half of the nineteenth century, many roads in the US were built to the new standards, of which the National Pike from West Virginia to Illinois was perhaps the most notable.

In the twentieth century, the ever-increasing use of motor vehicles threatened to break up roads built to nineteenth-century standards, so new techniques had to be developed.

On routes with heavy traffic, flexible pavements were replaced by rigid pavements, in which the top layer was concrete, 15 to 30 centimetres thick, laid on a prepared bed. Nowadays steel bars are laid within the concrete. This not only restrains shrinkage during setting, but also reduces expansion in warm weather. As a result, it is possible to lay long slabs without danger of cracking.

The demands of heavy traffic led to the concept of high-speed, long-distance roads, with access - or slip-lanes - spaced widely apart. The US Bronx River Parkway of 1925 was followed by several variants - Germany's autobahns and the Pan American Highway. Such roads - especially the intercity autobahns with their separate multi-lane carriageways for each direction - were the predecessors of today's motorways.

Bridges

The development by the Romans of the arched bridge marked the beginning of scientific bridge-building; hitherto, bridges had generally been crossings in the form of felled trees or flat stone blocks. Absorbing the load by compression, arched bridges are very strong. Most were built of stone, but brick and timber were also used. A fine early example is at Alcantara in Spain, built of granite by the Romans in AD 105 to span the River Tagus. In modern times, metal and concrete arched bridges have been constructed. The first significant metal bridge, built of cast iron in 1779, still stands at Ironbridge in England.

Steel, with its superior strength-to-weight ratio, soon replaced iron in metal bridge-work. In the railway age, the truss (or girder) bridge became popular. Built of wood or metal, the truss beam consists of upper and lower horizontal booms joined by vertical or inclined members.

The suspension bridge has a deck supported by suspenders that drop from one or more overhead cables. It requires strong anchorage at each end to resist the inward tension of the cables, and the deck is strengthened to control distortion by moving loads or high winds. Such bridges are nevertheless light, and therefore the most suitable for very long spans. The Clifton Suspension Bridge in the UK, designed by Isambard Kingdom Brunel (1806-59) to span the Avon Gorge in England, is famous both for its beautiful setting and for its elegant design. The 1998 Akashi Kaikyo Bridge in Japan has a span of 1,991 metres, which is the longest to date.

Cantilever bridges, such as the 1889 Forth Rail Bridge in Scotland, exploit the potential of steel construction to produce a wide clear water space. The spans have a central supporting pier and meet midstream. The downward thrust, where the spans meet, is countered by firm anchorage of the spans at their other ends. Although the suspension bridge can span a wider gap, the cantilever is relatively stable, and this was important for nineteenth-century railway builders. The world's longest cantilever span - 549 metres - is that of the Quebec rail bridge in Canada, constructed in 1918.

 

 

The Pompidou Centre

More than three decades after it was built, the Pompidou Centre in Paris has survived its moment at the edge of architectural fashion and proved itself to be one of the most remarkable buildings of the 20th century.

It was the most outstanding new building constructed in Paris for two generations. It looked like an explosion of brightly coloured service pipes in the calm of the city centre. However, when in 1977 the architects Richard Rogers and Renzo Piano stood among a large crowd of 5,000 at the opening of the Centre Culturel d'Art Georges Pompidou (known as the Pompidou), no one was really aware of the significance of this unusual building.

Rogers was only 38 when he and Piano won the competition to design a new cultural centre for Paris in the old market site. Young, unknown architects, they had been chosen from a field of nearly 700 to design one of the most prestigious buildings of its day. After six difficult years, with 25,000 drawings, seven lawsuits, battles over budgets, and a desperate last-minute scramble to finish the building, it had finally been done.

Yet the opening was a downbeat moment. The Pompidou Centre had been rubbished by the critics while it was being built, there was no more work in prospect for the architects, and their partnership had effectively broken down. But this was just a passing crisis. The Centre, which combined the national museum of modern art, exhibition space, a public library and a centre for modern music, proved an enormous success. It attracted six million visitors in its first year, and with its success, the critics swiftly changed their tune.

The architects had been driven by the desire for ultimate flexibility, for a building that would not limit the movement of its users. All the different parts were approached through the same enormous entrance hall and served by the same escalator, which was free to anyone to ride, whether they wanted to visit an exhibition or just admire the view. With all the services at one end of the building, escalators and lifts at the other, and the floors hung on giant steel beams providing uninterrupted space the size of two football pitches, their dream had become a reality.

The image of the Pompidou pervaded popular culture in the 1970s, making appearances everywhere - on record-album covers and a table lamp, and even acting as the set for a James Bond film. This did much to overcome the secretive nature of the architectural culture of its time, as it enabled a wider audience to appreciate the style and content of the building and so moved away from the strictly professional view.

The following year, Rogers was commissioned to design a new headquarters for Lloyd's Bank in London and went on to create one of Britain's most dynamic architectural practices. Piano is now among the world's most respected architects. But what of their shared creation?

It was certainly like no previous museum, with its plans for a flexible interior that not only had movable walls but floors that could also be adjusted up or down. This second feature did not in the end survive when the competition drawings were turned into a real building. In other ways, however, the finished building demonstrated a remarkable degree of refinement - of craftsmanship even - in the way the original diagram was transformed into a superbly detailed structure. It was this quality which, according to some critics, suggested that the Pompidou should be seen as closer to the 19th-century engineering tradition than the space age.

Nevertheless, as a model for urban planning, it has proved immensely influential. The Guggenheim in Bilbao and the many other major landmark projects that were built in the belief that innovatively designed cultural buildings can bring about urban renewal are all following the lead of the Pompidou Centre.

Other buildings may now challenge it for the title of Europe's most outlandish work of architecture. However, more than a quarter of a century later, this construction - it is hard to call it a building when there is no façade, just a lattice of steel beams and pipes and a long external escalator snaking up the outside - still seems extreme.

Today, the Pompidou Centre itself still looks much as it did when it opened. The shock value of its colour-coded plumbing and its structure has not faded with the years. But while traditionalists regarded it as an ugly attack on Paris when it was built, they now see it for what it is - an enormous achievement, technically and conceptually.

 

 

The Lake Erie Canal

Begun in 1817 and opened in its entirety in 1825, the Erie Canal is considered by some to be the engineering marvel of the nineteenth century. When the federal government concluded that the project was too ambitious to undertake, the State of New York took on the task of carving 363 miles of canal through the wilderness, with nothing but the muscle power of men and horses.

Once derided as ‘Clinton’s Folly’ for the Governor who lent his vision and political muscle to the project, the Erie Canal experienced unparalleled success almost overnight. The iconic waterway established settlement patterns for most of the United States during the nineteenth century, made New York the financial capital of the world, provided a critical supply line that helped the North win the Civil War, and precipitated a series of social and economic changes throughout a young America.

Explorers had long searched for a water route to the west. Throughout the eighteenth and nineteenth centuries, the lack of an efficient and safe transportation network kept populations and trade largely confined to coastal areas. At the beginning of the nineteenth century, the Allegheny Mountains were the Western Frontier. The Northwest Territories that would later become Illinois, Indiana, Michigan and Ohio were rich in timber, minerals, and fertile land for farming, but it took weeks to reach them. Travellers were faced with rutted turnpike roads that baked to hardness in the summer sun. In the winter, the roads dissolved into mud.

An imprisoned flour merchant named Jesse Hawley envisioned a better way: a canal from Buffalo on the eastern shore of Lake Erie to Albany on the upper Hudson River, a distance of almost 400 miles. Long a proponent of efficient water transportation, Hawley had gone bankrupt trying to move his products to market. Hawley's ideas caught the interest of Assemblyman Joshua Forman, who submitted the first state legislation related to the Erie Canal in 1808, calling for a series of surveys to be made examining the practicality of a water route between Lake Erie and the Hudson River. In 1810, Thomas Eddy and State Senator Jonas Platt, hoping to get plans for the canal moving forward, approached influential Senator De Witt Clinton, former mayor of New York City, to enlist his support. Though Clinton had been recruited to the canal effort by Eddy and Platt, he quickly became one of the canal's most active supporters and went on to tie his political fate to its success.

On April 15th, 1817, the New York State Legislature finally approved construction of the Erie Canal. The Legislature authorised $7 million for construction of the 363-mile long waterway, which was to be 40 feet wide and eighteen feet deep. Construction began on July 4th 1817 and took eight years.

Like most canals, the Erie Canal depended on a lock system in order to compensate for changes in water levels over distance. A lock is a section of canal or river that is closed off to control the water level, so that boats can be raised or lowered as they pass through it. Locks have two sets of sluice gates (top and bottom), which seal off and then open the entrances to the chamber, which is where a boat waits while the movement up or down takes place. In addition, locks also have valves at the bottom of the sluice gates and it is by opening these valves that water is allowed into and out of the chamber to raise or lower the water level, and hence the boat.
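
As a rough illustration of that sequence (not part of the original passage), the short sketch below walks a boat upstream through a single lock; the water levels and the wording of the steps are invented purely for clarity.

    def raise_boat(lower_level, upper_level):
        # Follows the passage's description: sluice gates at top and bottom, with
        # valves at their base that let water into and out of the sealed chamber.
        return [
            f"Boat enters the chamber through the open bottom gates (water at {lower_level} ft)",
            "Bottom sluice gates close, sealing the chamber",
            "Valve at the foot of the top gates opens; water flows in from the upper level",
            f"Chamber fills to {upper_level} ft and the boat rises with the water",
            "Valve closes, top sluice gates open, and the boat continues on its way",
        ]

    for step in raise_boat(lower_level=560.0, upper_level=568.5):
        print("-", step)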

The effect of the Erie Canal was both immediate and dramatic, and settlers poured west.

The explosion of trade prophesied by Governor Clinton began, spurred by freight rates from Buffalo to New York of $10 per ton by canal, compared with $100 per ton by road. In 1829, there were 3,640 bushels of wheat transported down the canal from Buffalo. By 1837, this figure had increased to 500,000 bushels and, four years later, it reached one million. In nine years, canal tolls more than recouped the entire cost of construction. Within 15 years of the canal’s opening, New York was the busiest port in America, moving tonnages greater than Boston, Baltimore and New Orleans combined. Today, it can still be seen that every major city in New York State falls along the trade route established by the Erie Canal and nearly 80 per cent of upstate New York’s inhabitants live within 25 miles of the Erie Canal.
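
Putting the passage's own figures side by side gives a sense of the scale of the change (a quick, illustrative calculation only):

    # Freight rates and wheat traffic quoted in the passage.
    road_rate, canal_rate = 100, 10                          # dollars per ton, Buffalo to New York
    print(f"Freight cost cut by {(road_rate - canal_rate) / road_rate:.0%} per ton")   # 90%

    wheat = {1829: 3_640, 1837: 500_000, 1841: 1_000_000}    # bushels of wheat shipped from Buffalo
    print(f"Wheat traffic grew roughly {wheat[1841] / wheat[1829]:.0f}-fold between 1829 and 1841")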

The completion of the Erie Canal spurred the first great westward movement of American settlers, gave access to the resources west of the Appalachians and made New York the preeminent commercial city in the United States. At one time, more than 50,000 people depended on the Erie Canal for their livelihood. From its inception, the Erie Canal helped form a whole new culture revolving around canal life. For those who travelled along the canal in packet boats or passenger vessels, the canal was an exciting place. Gambling and entertainment were frequent pastimes, and often families would meet each year at the same locations to share stories and adventures. Today, the canal has returned to its former glory and is filled with pleasure boats, fishermen, holidaymakers and cyclists riding the former towpaths where mules once trod. The excitement of the past is alive and well.

 

 

Sustainable architecture - lessons from the ant

 

Termite mounds were the inspiration for an innovative design in sustainable living

Africa owes its termite mounds a lot. Trees and shrubs take root in them. Prospectors mine them, looking for specks of gold carried up by termites from hundreds of metres below. And of course, they are a special treat to aardvarks and other insectivores.

Now, Africa is paying an offbeat tribute to these towers of mud. The extraordinary Eastgate Building in Harare, Zimbabwe’s capital city, is said to be the only one in the world to use the same cooling and heating principles as the termite mound.

Termites in Zimbabwe build gigantic mounds inside which they farm a fungus that is their primary food source. This must be kept at exactly 30.5°C, while the temperatures on the African veld outside can range from 1.5°C at night, only just above freezing, to a baking hot 40°C during the day. The termites achieve this remarkable feat by building a system of vents in the mound. Those at the base lead down into chambers cooled by wet mud carried up from water tables far below, and others lead up through a flue to the peak of the mound. By constantly opening and closing these heating and cooling vents over the course of the day the termites succeed in keeping the temperature constant in spite of the wide fluctuations outside.
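
A minimal sketch of this open-and-close strategy (not from the passage, and with invented heating and cooling figures) shows how simple vent control can hold the mound close to the temperature the fungus needs:

    TARGET = 30.5   # degrees C required by the fungus gardens

    def step(temp, vents_open):
        temp += 1.2              # heat constantly added by the colony and the sun
        if vents_open:
            temp -= 2.0          # cool air drawn up from the wet-mud chambers below
        return temp

    temp, vents = 30.5, False
    for hour in range(12):
        vents = temp > TARGET    # open the vents when too warm, shut them when cool enough
        temp = step(temp, vents)
        print(f"hour {hour:2d}: vents {'open' if vents else 'shut'}  mound at {temp:4.1f} C")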

Architect Mick Pearce used precisely the same strategy when designing the Eastgate Building, which has no air conditioning and virtually no heating. The building, the country's largest commercial and shopping complex, uses less than 10% of the energy of a conventional building its size. These efficiencies translated directly to the bottom line: the Eastgate's owners saved $3.5 million on a $36 million building because an air-conditioning plant didn't have to be imported. These savings were also passed on to tenants: rents are 20% lower than in a new building next door.

The complex is actually two buildings linked by bridges across a shady, glass-roofed atrium open to the breezes. Fans suck fresh air in from the atrium, blow it upstairs through hollow spaces under the floors and from there into each office through baseboard vents. As it rises and warms, it is drawn out via ceiling vents and finally exits through forty-eight brick chimneys.

To keep the harsh, high veld sun from heating the interior, no more than 25% of the outside is glass, and all the windows are screened by cement arches that jut out more than a metre.

During summer’s cool nights, big fans flush air through the building seven times an hour to chill the hollow floors. By day, smaller fans blow two changes of air an hour through the building, to circulate the air which has been in contact with the cool floors. For winter days, there are small heaters in the vents.

This is all possible only because Harare is 1,600 metres above sea level, has cloudless skies, little humidity and rapid temperature swings - days as warm as 31°C commonly drop to 14°C at night. 'You couldn't do this in New York, with its fantastically hot summers and fantastically cold winters,' Pearce said. But then his eyes lit up at the challenge: 'Perhaps you could store the summer's heat in water somehow.'

The engineering firm of Ove Arup & Partners, which worked with him on the design, monitors daily temperatures outside, under the floors and at knee, desk and ceiling level. Ove Arup's graphs show that the temperature of the building has generally stayed between 23°C and 25°C, with the exception of the annual hot spell just before the summer rains in October, and three days in November, when a janitor accidentally switched off the fans at night. The atrium, which funnels the winds through, can be much cooler. And the air is fresh - far more so than in air-conditioned buildings, where up to 30% of the air is recycled.

Pearce, disdaining smooth glass skins as ‘igloos in the Sahara’, calls his building, with its exposed girders and pipes, ‘spiky’. The design of the entrances is based on the porcupine-quill headdresses of the local Shona tribe. Elevators are designed to look like the mineshaft cages used in Zimbabwe's diamond mines. The shape of the fan covers, and the stone used in their construction, are echoes of Great Zimbabwe, the ruins that give the country its name.

Standing on a roof catwalk, peering down inside at people as small as termites below, Pearce said he hoped plants would grow wild in the atrium and pigeons and bats would move into it, like that termite fungus, further extending the whole 'organic machine' metaphor. The architecture, he says, is a regionalised style that responds to the biosphere, to the ancient traditional stone architecture of Zimbabwe's past, and to local human resources.

 

 


The Revolutionary Bridges of Robert Maillart

Swiss engineer Robert Maillart built some of the greatest bridges of the 20th century. His designs elegantly solved a basic engineering problem: how to support enormous weights using a slender arch.

Just as railway bridges were the great structural symbols of the 19th century, highway bridges became the engineering emblems of the 20th century. The invention of the automobile created an irresistible demand for paved roads and vehicular bridges throughout the developed world. The type of bridge needed for cars and trucks, however, is fundamentally different from that needed for locomotives. Most highway bridges carry lighter loads than railway bridges do, and their roadways can be sharply curved or steeply sloping. To meet these needs, many turn-of-the-century bridge designers began working with a new building material: reinforced concrete, which has steel bars embedded in it. And the master of this new material was Swiss structural engineer, Robert Maillart.

Early in his career, Maillart developed a unique method for designing bridges, buildings and other concrete structures. He rejected the complex mathematical analysis of loads and stresses that was being enthusiastically adopted by most of his contemporaries. At the same time, he also eschewed the decorative approach taken by many bridge builders of his time. He resisted imitating architectural styles and adding design elements solely for ornamentation. Maillart's method was a form of creative intuition. He had a knack for conceiving new shapes to solve classic engineering problems. And because he worked in a highly competitive field, one of his goals was economy - he won design and construction contracts because his structures were reasonably priced, often less costly than all his rivals' proposals.

Maillart’s first important bridge was built in the small Swiss town of Zuoz. The local officials had initially wanted a steel bridge to span the 30-metre wide Inn River, but Maillart argued that he could build a more elegant bridge made of reinforced concrete for about the same cost. His crucial innovation was incorporating the bridge’s arch and roadway into a form called the hollow-box arch, which would substantially reduce the bridge’s expense by minimising the amount of concrete needed. In a conventional arch bridge the weight of the roadway is transferred by columns to the arch, which must be relatively thick. In Maillart’s design, though, the roadway and arch were connected by three vertical walls, forming two hollow boxes running under the roadway (see diagram). The big advantage of this design was that because the arch would not have to bear the load alone, it could be much thinner - as little as one-third as thick as the arch in the conventional bridge.

His first masterpiece, however, was the 1905 Tavanasa Bridge over the Rhine river in the Swiss Alps. In this design, Maillart removed the parts of the vertical walls which were not essential because they carried no load. This produced a slender, lighter-looking form, which perfectly met the bridge’s structural requirements. But the Tavanasa Bridge gained little favourable publicity in Switzerland; on the contrary, it aroused strong aesthetic objections from public officials who were more comfortable with old-fashioned stone-faced bridges. Maillart, who had founded his own construction firm in 1902, was unable to win any more bridge projects, so he shifted his focus to designing buildings, water tanks and other structures made of reinforced concrete and did not resume his work on concrete bridges until the early 1920s.

His most important breakthrough during this period was the development of the deck-stiffened arch, the first example of which was the Flienglibach Bridge, built in 1923. An arch bridge is somewhat like an inverted cable. A cable curves downward when a weight is hung from it; an arch bridge curves upward to support the roadway, and the compression in the arch balances the dead load of the traffic. For aesthetic reasons, Maillart wanted a thinner arch and his solution was to connect the arch to the roadway with transverse walls. In this way, Maillart justified making the arch as thin as he could reasonably build it. His analysis accurately predicted the behaviour of the bridge but the leading authorities of Swiss engineering would argue against his methods for the next quarter of a century.
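
For readers who want the mechanics behind the inverted-cable analogy, a standard textbook relation (not stated in the passage) is that a parabolic arch or cable carrying a uniformly distributed load w over a span L, with rise or sag f, develops a horizontal thrust at its supports of

    \[
    H = \frac{w\,L^{2}}{8\,f}
    \]

so a uniformly loaded arch works mainly in axial compression. In a deck-stiffened arch, the stiff roadway girder resists the bending caused by uneven live loads, which is broadly what allowed Maillart to make the arch itself so thin.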

Over the next 10 years, Maillart concentrated on refining the visual appearance of the deck-stiffened arch. His best-known structure is the Salginatobel Bridge, completed in 1930. He won the competition for the contract because his design was the least expensive of the 19 submitted - the bridge and road were built for only 700,000 Swiss francs, equivalent to some $3.5 million today. Salginatobel was also Maillart’s longest span, at 90 metres and it had the most dramatic setting of all his structures, vaulting 80 metres above the ravine of the Salgina brook. In 1991 it became the first concrete bridge to be designated an international historic landmark.

Before his death in 1940, Maillart completed other remarkable bridges and continued to refine his designs. However, architects often recognised the high quality of Maillart’s structures before his fellow engineers did and in 1947 the architectural section of the Museum of Modern Art in New York City devoted a major exhibition entirely to his works. In contrast, very few American structural engineers at that time had even heard of Maillart. In the following years, however, engineers realised that Maillart’s bridges were more than just aesthetically pleasing - they were technically unsurpassed. Maillart’s hollow-box arch became the dominant design form for medium and long- span concrete bridges in the US. In Switzerland, professors finally began to teach Maillart’s ideas, which then influenced a new generation of designers.

Measuring Organisational Performance

There is clear-cut evidence that, for a period of at least one year, supervision which increases the direct pressure for productivity can achieve significant increases in production. However, such short-term increases are obtained only at a substantial and serious cost to the organisation.

To what extent can a manager make an impressive earnings record over a short period of one to three years by exploiting the company’s investment in the human organisation in his plant or division? To what extent will the quality of his organisation suffer if he does so? The following is a description of an important study conducted by the Institute for Social Research designed to answer these questions.

The study covered 500 clerical employees in four parallel divisions. Each division was organised in exactly the same way, used the same technology, did exactly the same kind of work, and had employees of comparable aptitudes.

Productivity in all four of the divisions depended on the number of clerks involved. The work entailed the processing of accounts and generating of invoices. Although the volume of work was considerable, the nature of the business was such that it could only be processed as it came along. Consequently, the only way in which productivity could be increased was to change the size of the workgroup.

The four divisions were assigned to two experimental programmes on a random basis. Each programme was assigned at random a division that had been historically high in productivity and a division that had been below average in productivity. No attempt was made to place a division in the programme that would best fit the methods of supervision habitually used by its manager, assistant managers, supervisors and assistant supervisors.

The experiment at the clerical level lasted for one year. Beforehand, several months were devoted to planning, and there was also a training period of approximately six months. Productivity was measured continuously and computed weekly throughout the year. The attitudes of employees and supervisory staff towards their work were measured just before and after the period.

Turning now to the heart of the study, in two divisions an attempt was made to change the supervision so that the decision levels were pushed down and detailed supervision of the workers reduced. More general supervision of the clerks and their supervisors was introduced. In addition, the managers, assistant managers, supervisors and assistant supervisors of these two divisions were trained in group methods of leadership, which they endeavoured to use as much as their skill would permit during the experimental year. For easy reference, the experimental changes in these two divisions will be labelled the ‘participative programme’.

In the other two divisions, by contrast, the programme called for modifying the supervision so as to increase the closeness of supervision and move the decision levels upwards. This will be labelled the ‘hierarchically controlled programme’. These changes were accomplished by a further extension of the scientific management approach. For example, one of the major changes made was to have the jobs timed and to have standard times computed. This showed that these divisions were overstaffed by about 30%. The general manager then ordered the managers of these two divisions to cut staff by 25%. This was done by transfers without replacing the persons who left; no one was to be dismissed.

Results of the Experiment

Changes in Productivity 

Figure 1 shows the changes in salary costs per unit of work, which reflect the change in productivity that occurred in the divisions. As will be observed, the hierarchically controlled programmes increased productivity by about 25%. This was a result of the direct orders from the general manager to reduce staff by that amount. Direct pressure produced a substantial increase in production.

A significant increase in productivity of 20% was also achieved in the participative programme, but this was not as great an increase as in the hierarchically controlled programme. To bring about this improvement, the clerks themselves participated in the decision to reduce the size of the work group. (They were aware, of course, that productivity increases were sought by management in conducting these experiments.) Obviously, deciding to reduce the size of a work group by eliminating some of its members is probably one of the most difficult decisions for a work group to make. Yet the clerks made it. In fact, one division in the participative programme increased its productivity by about the same amount as each of the two divisions in the hierarchically controlled programme. The other participative division, which historically had been the poorest of all the divisions, did not do so well and increased productivity by only 15%.

Changes in Attitudes

Although both programmes had similar effects on productivity, they had significantly different results in other respects. The productivity increases in the hierarchically controlled programme were accompanied by shifts in an adverse direction in such factors as loyalty, attitudes, interest, and involvement in the work. But just the opposite was true in the participative programme.

For example, Figure 2 shows that when more general supervision and increased participation were provided, the employees’ feeling of responsibility to see that the work got done increased. Again, when the supervisor was away, they kept on working. In the hierarchically controlled programme, however, the feeling of responsibility decreased, and when the supervisor was absent, work tended to stop.

As Figure 3 shows, the employees in the participative programme at the end of the year felt that their manager and assistant manager were ‘closer to them’ than at the beginning of the year. The opposite was true in the hierarchical programme. Moreover, as Figure 4 shows, employees in the participative programme felt that their supervisors were more likely to ‘pull’ for them, or for the company and them, and not be solely interested in the company, while in the hierarchically controlled programme, the opposite trend occurred.

 

 

Population viability analysis

Part A

To make political decisions about the extent and type of forestry in a region it is important to understand the consequences of those decisions. One tool for assessing the impact of forestry on the ecosystem is population viability analysis (PVA). This is a tool for predicting the probability that a species will become extinct in a particular region over a specific period. It has been successfully used in the United States to provide input into resource exploitation decisions and assist wildlife managers, and there is now enormous potential for using population viability analysis to assist wildlife management in Australia's forests.

A species becomes extinct when the last individual dies. This observation is a useful starting point for any discussion of extinction as it highlights the role of luck and chance in the extinction process. To make a prediction about extinction we need to understand the processes that can contribute to it and these fall into four broad categories which are discussed below.

Part B

A

Early attempts to predict population viability were based on demographic uncertainty. Whether an individual survives from one year to the next will largely be a matter of chance. Some pairs may produce several young in a single year while others may produce none in that same year. Small populations will fluctuate enormously because of the random nature of birth and death and these chance fluctuations can cause species extinctions even if, on average, the population size should increase. Taking only this uncertainty of ability to reproduce into account, extinction is unlikely if the number of individuals in a population is above about 50 and the population is growing.
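
The role of chance described here can be made concrete with a small Monte Carlo sketch (not from the passage). Each individual survives and reproduces at random; the survival and birth rates below are invented, chosen so that the population grows by about 3% a year on average. Even so, with these illustrative numbers, very small populations often die out through bad luck alone, while populations above roughly 50 individuals almost never do.

    import numpy as np

    rng = np.random.default_rng(0)

    def extinction_probability(start_size, years=100, trials=2000,
                               survival=0.85, birth_rate=0.18, safe_size=1000):
        # Average yearly multiplier = survival + birth_rate = 1.03, i.e. growth on average.
        extinct = 0
        for _ in range(trials):
            n = start_size
            for _ in range(years):
                n = rng.binomial(n, survival) + rng.binomial(n, birth_rate)
                if n == 0:
                    extinct += 1
                    break
                if n >= safe_size:      # large enough that chance extinction is negligible
                    break
        return extinct / trials

    for n0 in (5, 10, 25, 50, 100):
        print(f"starting population {n0:3d}: P(extinct within 100 years) ~ {extinction_probability(n0):.2f}")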

B

Small populations cannot avoid a certain amount of inbreeding. This is particularly true if there is a very small number of one sex. For example, if there are only 20 individuals of a species and only one is a male, all future individuals in the species must be descended from that one male. For most animal species such individuals are less likely to survive and reproduce. Inbreeding increases the chance of extinction.

C

Variation within a species is the raw material upon which natural selection acts. Without genetic variability a species lacks the capacity to evolve and cannot adapt to changes in its environment or to new predators and new diseases. The loss of genetic diversity associated with reductions in population size will contribute to the likelihood of extinction.

D

Recent research has shown that other factors need to be considered. Australia’s environment fluctuates enormously from year to year. These fluctuations add yet another degree of uncertainty to the survival of many species. Catastrophes such as fire, flood, drought or epidemic may reduce population sizes to a small fraction of their average level. When allowance is made for these two additional elements of uncertainty the population size necessary to be confident of persistence for a few hundred years may increase to several thousand.

Part C

Besides these processes we need to bear in mind the distribution of a population. A species that occurs in five isolated places each containing 20 individuals will not have the same probability of extinction as a species with a single population of 100 individuals in a single locality.

Where logging occurs (that is, the cutting down of forests for timber) forest-dependent creatures in that area will be forced to leave. Ground-dwelling herbivores may return within a decade. However, arboreal marsupials (that is, animals which live in trees) may not recover to pre-logging densities for over a century. As more forests are logged, animal population sizes will be reduced further. Regardless of the theory or model that we choose, a reduction in population size decreases the genetic diversity of a population and increases the probability of extinction because of any or all of the processes listed above. It is therefore a scientific fact that increasing the area that is logged in any region will increase the probability that forest-dependent animals will become extinct.

Dark soil

To find it, you have to go digging in rainforests, and to the untrained eye, it does not seem special at all - just a thick layer of dark earth that would not look out of place in many gardens. But these fertile, dark soils are in fact very special, because despite the lushness of tropical rainforests, the soils beneath them are usually very poor and thin. Even more surprising is where this dark soil comes from.

‘You might expect this precious fertile resource to be found in the deep jungle, far from human settlements or farmers,’ says James Fraser, who has been hunting for it in Africa’s rainforests. 'But I go looking for dark earth round the edge of villages and ancient towns, and in traditionally farmed areas. It’s usually there. And the older and larger the settlement, the more dark earth there is.’

Such findings are overturning some long-held ideas. Jungle farmers are usually blamed not just for cutting down trees but also for exhausting the soils. And yet the discovery of these rich soils - first in South America and now in Africa, too - suggests that, whether by chance or design, many people living in rainforests farmed in a way that enhanced rather than destroyed soils. In fact, it is becoming clear that part of what we think of as lush, pure rainforest is actually long-abandoned farmland, enriched by the waste created by ancient humans.

 

 

NEW RULES FOR THE PAPER GAME

Computerized data storage and electronic mail were to have heralded the paperless office. But, contrary to expectation, paper consumption throughout the world shows no sign of abating. In fact, consumption, especially of printing and writing papers, continues to increase. World demand for paper and board is now expected to grow faster than general economic growth in the next 15 years. Strong demand will be underpinned by the growing industrialization of South East Asia, the re-emergence of paper packaging, greater use of facsimile machines and photocopiers, and the popularity of direct-mail advertising. It is possible that by 2007, world paper and board demand will reach 455 million tonnes, compared with 241 million tonnes in 1991.

The pulp and paper industry has not been badly affected by the electronic technologies that promised a paperless society. But what has radically altered the industry's structure is pressure from another front - a more environmentally conscious society driving an irreversible move towards cleaner industrial production. The environmental consequences of antiquated pulp mill practices and technologies had marked this industry as one in need of reform. Graphic descriptions of deformed fish and thinning populations, particularly in the Baltic Sea where old pulp mills had discharged untreated effluents for 100 years, have disturbed the international community.

Until the 1950s, it was common for pulp mills and other industries to discharge untreated effluent into rivers and seas. The environmental effects were at the time either not understood, or regarded as an acceptable cost of economic prosperity in an increasingly import-oriented world economy. But greater environmental awareness has spurred a fundamental change in attitude in the community, in government and in industry itself.

Since the early 1980s, most of the world-scale pulp mills in Scandinavia and North America have modernized their operations, outlaying substantial amounts to improve production methods. Changes in mill design and processes have been aimed at minimizing the environmental effects of effluent discharge while at the same time producing pulp with the whiteness and strength demanded by the international market. The environmental impetus is taking this industry even further, with the focus now on developing processes that may even eliminate waste-water discharges. But the ghost of the old mills continues to haunt industry today. In Europe, companies face a flood of environment-related legislation. In Germany, companies are now being held responsible for the waste they create.

Pulp is the porridge-like mass of plant fibres from which paper is made. Paper makers choose the type of plant fibre and the processing methods, depending on what the end product will be used for: whether it is a sturdy packing box, a smooth sheet of writing paper or a fragile tissue. In wood, which is the source of about 90% of the world's paper production, fibres are bound together by lignin, which gives the unbleached pulp a brown colour. Pulping can be done by mechanical grinding, or by chemical treatment in which woodchips are "cooked" with chemicals, or by a combination of both methods.

Kraft pulping is the most widely used chemical process for producing pulp with the strength required by the high-quality paper market. It is now usually carried out in a continuous process in a large vessel called a digester. Woodchips are fed from a pile into the top of the digester. In the digester, the chips are cooked in a solution called white liquor, composed of caustic soda (sodium hydroxide) and sodium sulphide. The chips are cooked at high temperatures of up to 170 degrees Celsius for up to three hours. The pulp is then washed and separated from the spent cooking liquor, which has turned dark and is now appropriately called black liquor. An important feature of kraft pulping is a chemical recovery system which recycles about 95% of the cooking chemicals and produces more than enough energy to run the mill. In a series of steps involving a furnace and tanks, some of the black liquor is transformed into energy, while some is regenerated into the original white cooking liquor. The pulp that comes out has little lignin left in the fibres. Bleaching removes the last remaining lignin and brightens the pulp. Most modern mills have modified their pulping process to remove as much of the lignin as possible before the pulp moves to the bleaching stage.

 

 

HELIUM’S FUTURE UP IN THE AIR

A In recent years we have all been exposed to dire media reports concerning the impending demise of global coal and oil reserves, but the depletion of another key non-renewable resource continues without receiving much press at all. Helium - an inert, odourless, monatomic element known to lay people as the substance that makes balloons float and voices squeak when inhaled - could be gone from this planet within a generation.

Helium itself is not rare; there is actually a plentiful supply of it in the cosmos. In fact, 24 per cent of our galaxy's elemental mass consists of helium, which makes it the second most abundant element in our universe. Because of its lightness, however, most helium vanished from our own planet many years ago. Consequently, only a minuscule proportion - 0.00052%, to be exact - remains in earth's atmosphere. Helium is the by-product of millennia of radioactive decay from the elements thorium and uranium. The helium is mostly trapped in subterranean natural gas bunkers and commercially extracted through a method known as fractional distillation.

The loss of helium on Earth would affect society greatly. Defying the perception of it as a novelty substance for parties and gimmicks, the element actually has many vital applications in society. Probably the most well known commercial usage is in airships and blimps (non-flammable helium replaced hydrogen as the lifting gas du jour after the Hindenburg catastrophe in 1937, during which an airship burst into flames and crashed to the ground killing some passengers and crew). But helium is also instrumental in deep-sea diving, where it is blended with nitrogen to mitigate the dangers of inhaling ordinary air under high pressure; as a cleaning agent for rocket engines; and, in its most prevalent use, as a coolant for superconducting magnets in hospital MRI (magnetic resonance imaging) scanners.

The possibility of losing helium forever poses the threat of a real crisis because its unique qualities are extraordinarily difficult, if not impossible to duplicate (certainly, no biosynthetic ersatz product is close to approaching the point of feasibility for helium, even as similar developments continue apace for oil and coal). Helium is even cheerfully derided as a “loner” element since it does not adhere to other molecules like its cousin, hydrogen. According to Dr. Lee Sobotka, helium is the “most noble of gases, meaning it’s very stable and non-reactive for the most part ... it has a closed electronic configuration, a very tightly bound atom. It is this coveting of its own electrons that prevents combination with other elements’. Another important attribute is helium’s unique boiling point, which is lower than that for any other element. The worsening global shortage could render millions of dollars of high-value, life-saving equipment totally useless. The dwindling supplies have already resulted in the postponement of research and development projects in physics laboratories and manufacturing plants around the world. There is an enormous supply and demand imbalance partly brought about by the expansion of high-tech manufacturing in Asia.

The source of the problem is the Helium Privatisation Act (HPA), an American law passed in 1996 that requires the U.S. National Helium Reserve to liquidate its helium assets by 2015 regardless of the market price. Although the law was intended to recover the original cost of the reserve, it was passed by a U.S. Congress ignorant of its ramifications, and the result of this fire sale is that global helium prices are so artificially deflated that few can be bothered to recycle the substance or use it judiciously. Deflated values also mean that natural gas extractors see no reason to capture helium. Much is lost in the process of extraction. As Sobotka notes: "[t]he government had the good vision to store helium, and the question now is: Will the corporations have the vision to capture it when extracting natural gas, and consumers the wisdom to recycle? This takes long-term vision because present market forces are not sufficient to compel prudent practice." For Nobel-prize laureate Robert Richardson, the U.S. government must be prevailed upon to repeal its privatisation policy as the country supplies over 80 per cent of global helium, mostly from the National Helium Reserve. For Richardson, a twenty- to fifty-fold increase in prices would provide incentives to recycle. A number of steps need to be taken in order to avert a costly predicament in the coming decades. Firstly, all existing supplies of helium ought to be conserved and released only by permit, with medical uses receiving precedence over other commercial or recreational demands. Secondly, conservation should be obligatory and enforced by a regulatory agency. At the moment some users, such as hospitals, tend to recycle diligently while others, such as NASA, squander massive amounts of helium. Lastly, research into alternatives to helium must begin in earnest.

 

 

The Problem of Scarce Resources

Section A

The problem of how health-care resources should be allocated or apportioned, so that they are distributed in both the most just and most efficient way, is not a new one. Every health system in an economically developed society is faced with the need to decide (either formally or informally) what proportion of the community's total resources should be spent on health-care; how resources are to be apportioned; what diseases and disabilities and which forms of treatment are to be given priority; which members of the community are to be given special consideration in respect of their health needs; and which forms of treatment are the most cost-effective.

Section B

What is new is that, from the 1950s onwards, there have been certain general changes in outlook about the finitude of resources as a whole and of health-care resources in particular, as well as more specific changes regarding the clientele of health-care resources and the cost to the community of those resources. Thus, in the 1950s and 1960s, there emerged an awareness in Western societies that resources for the provision of fossil fuel energy were finite and exhaustible and that the capacity of nature or the environment to sustain economic development and population was also finite. In other words, we became aware of the obvious fact that there were ‘limits to growth’. The new consciousness that there were also severe limits to health-care resources was part of this general revelation of the obvious. Looking back, it now seems quite incredible that in the national health systems that emerged in many countries in the years immediately after the 1939-45 World War, it was assumed without question that all the basic health needs of any community could be satisfied, at least in principle; the ‘invisible hand’ of economic progress would provide.

Section C

However, at exactly the same time as this new realisation of the finite character of health-care resources was sinking in, an awareness of a contrary kind was developing in Western societies: that people have a basic right to health-care as a necessary condition of a proper human life. Like education, political and legal processes and institutions, public order, communication, transport and money supply, health-care came to be seen as one of the fundamental social facilities necessary for people to exercise their other rights as autonomous human beings. People are not in a position to exercise personal liberty and to be self-determining if they are poverty-stricken, or deprived of basic education, or do not live within a context of law and order. In the same way, basic health-care is a condition of the exercise of autonomy.

Section D

Although the language of ‘rights’ sometimes leads to confusion, by the late 1970s it was recognised in most societies that people have a right to health-care (though there has been considerable resistance in the United States to the idea that there is a formal right to health-care). It is also accepted that this right generates an obligation or duty for the state to ensure that adequate health-care resources are provided out of the public purse. The state has no obligation to provide a health-care system itself, but to ensure that such a system is provided. Put another way, basic health-care is now recognised as a ‘public good’, rather than a ‘private good’ that one is expected to buy for oneself. As the 1976 declaration of the World Health Organisation put it: ‘The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition.’ As has just been remarked, in a liberal society basic health is seen as one of the indispensable conditions for the exercise of personal autonomy.

Section E

Just at the time when it became obvious that health-care resources could not possibly meet the demands being made upon them, people were demanding that their fundamental right to health-care be satisfied by the state. The second set of more specific changes that have led to the present concern about the distribution of health-care resources stems from the dramatic rise in health costs in most OECD¹ countries, accompanied by large-scale demographic and social changes which have meant, to take one example, that elderly people are now major (and relatively very expensive) consumers of health-care resources. Thus in OECD countries as a whole, health costs increased from 3.8% of GDP² in 1960 to 7% of GDP in 1980, and it has been predicted that the proportion of health costs to GDP will continue to increase. (In the US the current figure is about 12% of GDP, and in Australia about 7.8% of GDP.)

As a consequence, during the 1980s a kind of doomsday scenario (analogous to similar doomsday extrapolations about energy needs and fossil fuels or about population increases) was projected by health administrators, economists and politicians. In this scenario, ever-rising health costs were matched against static or declining resources.

 


 

¹ Organisation for Economic Cooperation and Development

² Gross Domestic Product

 

 

Making Every Drop Count

The history of human civilisation is entwined with the history of the ways we have learned to manipulate water resources. As towns gradually expanded, water was brought from increasingly remote sources, leading to sophisticated engineering efforts such as dams and aqueducts. At the height of the Roman Empire, nine major systems, with an innovative layout of pipes and well-built sewers, supplied the occupants of Rome with as much water per person as is provided in many parts of the industrial world today.

During the industrial revolution and population explosion of the 19th and 20th centuries, the demand for water rose dramatically. Unprecedented construction of tens of thousands of monumental engineering projects designed to control floods, protect clean water supplies, and provide water for irrigation and hydropower brought great benefits to hundreds of millions of people. Food production has kept pace with soaring populations mainly because of the expansion of artificial irrigation systems that make possible the growth of 40% of the world's food. Nearly one fifth of all the electricity generated worldwide is produced by turbines spun by the power of falling water.

Yet there is a dark side to this picture: despite our progress, half of the world's population still suffers with water services inferior to those available to the ancient Greeks and Romans. As the United Nations report on access to water reiterated in November 2001, more than one billion people lack access to clean drinking water; some two and a half billion do not have adequate sanitation services. Preventable water-related diseases kill an estimated 10,000 to 20,000 children every day, and the latest evidence suggests that we are falling behind in efforts to solve these problems.

The consequences of our water policies extend beyond jeopardising human health. Tens of millions of people have been forced to move from their homes - often with little warning or compensation - to make way for the reservoirs behind dams. More than 20 % of all freshwater fish species are now threatened or endangered because dams and water withdrawals have destroyed the free-flowing river ecosystems where they thrive. Certain irrigation practices degrade soil quality and reduce agricultural productivity. Groundwater aquifers* are being pumped down faster than they are naturally replenished in parts of India, China, the USA and elsewhere. And disputes over shared water resources have led to violence and continue to raise local, national and even international tensions.

*underground stores of water

At the outset of the new millennium, however, the way resource planners think about water is beginning to change. The focus is slowly shifting back to the provision of basic human and environmental needs as top priority - ensuring ‘some for all,’ instead of ‘more for some’. Some water experts are now demanding that existing infrastructure be used in smarter ways rather than building new facilities, which is increasingly considered the option of last, not first, resort. This shift in philosophy has not been universally accepted, and it comes with strong opposition from some established water organisations. Nevertheless, it may be the only way to address successfully the pressing problems of providing everyone with clean water to drink, adequate water to grow food and a life free from preventable water-related illness.

Fortunately - and unexpectedly - the demand for water is not rising as rapidly as some predicted. As a result, the pressure to build new water infrastructures has diminished over the past two decades. Although population, industrial output and economic productivity have continued to soar in developed nations, the rate at which people withdraw water from aquifers, rivers and lakes has slowed. And in a few parts of the world, demand has actually fallen.

What explains this remarkable turn of events? Two factors: people have figured out how to use water more efficiently, and communities are rethinking their priorities for water use. Throughout the first three-quarters of the 20th century, the quantity of freshwater consumed per person doubled on average; in the USA, water withdrawals increased tenfold while the population quadrupled. But since 1980, the amount of water consumed per person has actually decreased, thanks to a range of new technologies that help to conserve water in homes and industry. In 1965, for instance, Japan used approximately 13 million gallons* of water to produce $1 million of commercial output; by 1989 this had dropped to 3.5 million gallons (even accounting for inflation) - almost a quadrupling of water productivity. In the USA, water withdrawals have fallen by more than 20 % from their peak in 1980.
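
The 'quadrupling' can be checked directly from the figures quoted (a simple illustrative calculation):

    gallons_1965 = 13.0e6    # gallons of water per $1 million of commercial output in 1965
    gallons_1989 = 3.5e6     # the same measure in 1989
    print(f"Water needed per $1 million of output fell {gallons_1965 / gallons_1989:.1f}-fold")   # ~3.7, "almost a quadrupling"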

On the other hand, dams, aqueducts and other kinds of infrastructure will still have to be built, particularly in developing countries where basic human needs have not been met. But such projects must be built to higher specifications and with more accountability to local people and their environment than in the past. And even in regions where new projects seem warranted, we must find ways to meet demands with fewer resources, respecting ecological criteria and working to a smaller budget.

* 1 gallon: 4.546 litres

 

 

 

An assessment of micro-wind turbines

In terms of micro-renewable energy sources suitable for private use, a 15-kilowatt (kW) turbine is at the biggest end of the spectrum. With a nine metre diameter and a pole as high as a four-storey house, this is the most efficient form of wind micro-turbine, and the sort of thing you could install only if you had plenty of space and money. According to one estimate, a 15-kW micro-turbine (that's one with the maximum output), costing £41,000 to purchase and a further £9,000 to install, is capable of delivering 25,000 kilowatt-hours (kWh) of electricity each year if placed on a suitably windy site.

I don’t know of any credible studies of the greenhouse gas emissions involved in producing and installing turbines, so my estimates here are going to be even broader than usual. However, it is worth trying. If turbine manufacture is about as carbon intensive per pound sterling of product as other generators and electrical motors, which seems a reasonable assumption, the carbon intensity of manufacture will be around 640 kilograms (kg) per £1,000 of value. Installation is probably about as carbon intensive as typical construction, at around 380 kg per £1,000. That makes the carbon footprint (the total amount of greenhouse gases that installing a turbine creates) 30 tonnes.
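As a rough cross-check of that 30-tonne figure, the sketch below simply multiplies the estimated carbon intensities by the purchase and installation costs; every number in it is one of the estimates quoted above, not independent data:

```python
# Embodied carbon of a 15 kW micro-turbine, using the estimates in the passage.

purchase_cost_gbp = 41_000        # cost of the turbine itself
install_cost_gbp = 9_000          # cost of installation

manufacture_kg_per_1000gbp = 640  # assumed carbon intensity of manufacture
install_kg_per_1000gbp = 380      # assumed carbon intensity of construction work

footprint_kg = (purchase_cost_gbp / 1000) * manufacture_kg_per_1000gbp \
             + (install_cost_gbp / 1000) * install_kg_per_1000gbp
print(f"Embodied carbon: {footprint_kg / 1000:.1f} tonnes")
# -> about 29.7 tonnes, which the passage rounds to 30 tonnes
```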

The carbon savings from wind-powered electricity generation depend on the carbon intensity of the electricity that you’re replacing. Let’s assume that your generation replaces the coal-fuelled part of the country’s energy mix. In other words, if you live in the UK, let’s say that rather than replacing typical grid electricity, which comes from a mix of coal, gas, oil and renewable energy sources, the effect of your turbine is to reduce the use of coal-fired power stations. That’s reasonable, because coal is the least preferable source in the electricity mix. In this case the carbon saving is roughly one kilogram per kWh, so you save 25 tonnes per year and pay back the embodied carbon in just 14 months - a great start.
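Continuing the same back-of-the-envelope sums, the carbon payback period follows directly from the annual output and the assumed saving of one kilogram of CO2 per kWh of displaced coal-fired generation:

```python
# Carbon payback time, using the passage's assumptions.

annual_output_kwh = 25_000     # electricity generated per year on a windy site
saving_kg_per_kwh = 1.0        # assumed saving when displacing coal-fired power
embodied_carbon_kg = 29_660    # from the previous sketch (rounded to 30 tonnes in the text)

annual_saving_tonnes = annual_output_kwh * saving_kg_per_kwh / 1000
payback_months = embodied_carbon_kg / (annual_output_kwh * saving_kg_per_kwh) * 12
print(f"Annual saving: {annual_saving_tonnes:.0f} tonnes; payback: {payback_months:.0f} months")
# -> 25 tonnes per year, paid back in roughly 14 months
```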

The UK government has recently introduced a subsidy for renewable energy that pays individual producers 24p per energy unit on top of all the money they save on their own fuel bill, and on selling surplus electricity back to the grid at approximately 5p per unit. With all this taken into account, individuals would get back £7,250 per year on their investment. That pays back the costs in about six years. It makes good financial sense and, for people who care about the carbon savings for their own sake, it looks like a fantastic move. The carbon investment pays back in just over a year, and every year after that is a 25-tonne carbon saving. (It’s important to remember that all these sums rely on a wind turbine having a favourable location.)

So, at face value, the turbine looks like a great idea environmentally, and a fairly good long-term investment economically for the person installing it. However, there is a crucial perspective missing from the analysis so far. Has the government spent its money wisely? It has invested 24p per unit into each micro-turbine. That works out at a massive £250 per tonne of carbon saved. My calculations tell me that had the government invested its money in offshore wind farms, instead of subsidising smaller domestic turbines, they would have broken even after eight years. In other words, the micro-turbine works out as a good investment for individuals, but only because the government spends, and arguably wastes, so much money subsidising it. Carbon savings are far lower too.
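The financial side can be checked the same way. The £7,250 annual return is taken straight from the passage (it bundles the 24p-per-unit subsidy with fuel-bill savings and exported surplus, whose split is not given), so only the owner's payback period and the subsidy cost per tonne are derived here:

```python
# Financial payback and subsidy cost per tonne of carbon saved,
# using only figures quoted in the passage.

total_cost_gbp = 41_000 + 9_000      # purchase plus installation
annual_return_gbp = 7_250            # subsidy + bill savings + exported surplus (as quoted)
subsidy_gbp_per_kwh = 0.24           # government subsidy per unit generated
annual_output_kwh = 25_000
carbon_saved_tonnes_per_year = 25    # from the carbon sketch above

payback_years = total_cost_gbp / annual_return_gbp
subsidy_per_tonne = subsidy_gbp_per_kwh * annual_output_kwh / carbon_saved_tonnes_per_year
print(f"Owner's payback: about {payback_years:.1f} years")
print(f"Subsidy cost: about £{subsidy_per_tonne:.0f} per tonne of carbon saved")
# -> just under seven years on the full £50,000 cost (the passage quotes about six years),
#    and about £240 per tonne, which the passage rounds up to £250
```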

Nevertheless, although the micro-wind turbine subsidy doesn’t look like the very best way of spending government resources on climate change mitigation, we are talking about investing only about 0.075 percent per year of the nation’s GDP to get a one percent reduction in carbon emissions, which is a worthwhile benefit. In other words, it could be much better, but it could be worse. In addition, such investment helps to promote and sustain developing technology.

There is one extra favourable way of looking at the micro-wind turbine, even if it is not the single best way of investing money in cutting carbon. Input-output modelling has told us that it is actually quite difficult to spend money without having a negative carbon impact. So if the subsidy encourages people to spend their money on a carbon-reducing technology such as a wind turbine, rather than on carbon-producing goods like cars, and services such as overseas holidays, then the reductions in emissions will be greater than my simple sums above have suggested.

 

 


Deforestation in the 21st century

When it comes to cutting down trees, satellite data reveals a shift from the patterns of the past

Globally, roughly 13 million hectares of forest are destroyed each year. Such deforestation has long been driven by farmers desperate to earn a living or by loggers building new roads into pristine forest. But now new data appears to show that big, block clearings that reflect industrial deforestation have come to dominate, rather than these smaller-scale efforts that leave behind long, narrow swaths of cleared land. Geographer Ruth DeFries of Columbia University and her colleagues used satellite images to analyse tree-clearing in countries ringing the tropics, representing 98 per cent of all remaining tropical forest. Instead of the usual ‘fish bone' signature of deforestation from small-scale operations, large, chunky blocks of cleared land reveal a new motive for cutting down woods.

In fact, a statistical analysis of 41 countries showed that forest loss rates were most closely linked with urban population growth and agricultural exports in the early part of the 21st century - even overall population growth was not as strong an influence. ‘In previous decades, deforestation was associated with planned colonisation, resettlement schemes in local areas and farmers clearing land to grow food for subsistence,' DeFries says. ‘What we’re seeing now is a shift from small-scale farmers driving deforestation to distant demands from urban growth, agricultural trade and exports being more important drivers.’

In other words, the increasing urbanisation of the developing world, as populations leave rural areas to concentrate in booming cities, is driving deforestation, rather than containing it. Coupled with this there is an ongoing increase in consumption in the developed world of products that have an impact on forests, whether furniture, shoe leather or chicken feed. ‘One of the really striking characteristics of this century is urbanisation and rapid urban growth in the developing world,’ DeFries says, ‘People in cities need to eat.’ ‘There’s no surprise there,’ observes Scott Poynton, executive director of the Tropical Forest Trust, a Switzerland-based organisation that helps businesses implement and manage sustainable forestry in countries such as Brazil, Congo and Indonesia. ‘It’s not about people chopping down trees. It's all the people in New York, Europe and elsewhere who want cheap products, primarily food.’

DeFries argues that in order to help sustain this increasing urban and global demand, agricultural productivity will need to be increased on lands that have already been cleared. This means that better crop varieties or better management techniques will need to be used on the many degraded and abandoned lands in the tropics. And the Tropical Forest Trust is building management systems to keep illegally harvested wood from ending up in, for example, deck chairs, as well as expanding its efforts to look at how to reduce the ‘forest footprint’ of agricultural products such as palm oil. Poynton says, ‘The point is to give forests value as forests, to keep them as forests and give them a use as forests. They’re not going to be locked away as national parks. That’s not going to happen.’

But it is not all bad news. Halts in tropical deforestation have resulted in forest regrowth in some areas where tropical lands were previously cleared. And forest clearing in the Amazon, the world’s largest tropical forest, dropped from roughly 1.9 million hectares a year in the 1990s to 1.6 million hectares a year over the last decade, according to the Brazilian government. 'We know that deforestation has slowed down in at least the Brazilian Amazon,’ DeFries says. ‘Every place is different. Every country has its own particular situation, circumstances and driving forces.’ 

Regardless of this, deforestation continues, and cutting down forests is one of the largest sources of greenhouse gas emissions from human activity - a double blow that both eliminates a biological system to suck up CO2 and creates a new source of greenhouse gases in the form of decaying plants. The United Nations Environment Programme estimates that slowing such deforestation could reduce some 50 billion metric tons of CO2, or more than a year of global emissions. Indeed, international climate negotiations continue to attempt to set up a system to encourage this, known as the UN Development Programme’s fund for reducing emissions from deforestation and forest degradation in developing countries (REDD). ‘If policies [like REDD] are to be effective, we need to understand what the driving forces are behind deforestation,’ DeFries argues. This is particularly important in the light of new pressures that are on the horizon: the need to reduce our dependence on fossil fuels and find alternative power sources, particularly for private cars, is forcing governments to make products such as biofuels more readily accessible. This will only exacerbate the pressures on tropical forests.

But millions of hectares of pristine forest remain to protect, according to this new analysis from Columbia University. Approximately 60 percent of the remaining tropical forests are in countries or areas that currently have little agricultural trade or urban growth. The amount of forest area in places like central Africa, Guyana and Suriname, DeFries notes, is huge. ‘There’s a lot of forest that has not yet faced these pressures.’

 

 

Glaciers

Besides the earth’s oceans, glacier ice is the largest source of water on earth. A glacier is a massive stream or sheet of ice that moves underneath itself under the influence of gravity. Some glaciers travel down mountains or valleys, while others spread across a large expanse of land. Heavily glaciated regions such as Greenland and Antarctica are called continental glaciers. These two ice sheets encompass more than 95 percent of the earth’s glacial ice. The Greenland ice sheet is almost 10,000 feet thick in some areas, and the weight of this glacier is so heavy that much of the region has been depressed below sea level. Smaller glaciers that occur at higher elevations are called alpine or valley glaciers. Another way of classifying glaciers is in terms of their internal temperature. In temperate glaciers, the ice within the glacier is near its melting point. Polar glaciers, in contrast, always maintain temperatures far below melting.

The majority of the earth’s glaciers are located near the poles, though glaciers exist on all continents, including Africa and Oceania. The reason glaciers are generally formed in high alpine regions is that they require cold temperatures throughout the year. In these areas where there is little opportunity for summer ablation (loss of mass), snow changes to compacted firn and then crystallized ice. During periods in which melting and evaporation exceed the amount of snowfall, glaciers will retreat rather than progress. While glaciers rely heavily on snowfall, other climatic conditions including freezing rain, avalanches, and wind, contribute to their growth. One year of below average precipitation can stunt the growth of a glacier tremendously. With the rare exception of surging glaciers, a common glacier flows about 10 inches per day in the summer and 5 inches per day in the winter. The fastest glacial surge on record occurred in 1953, when the Kutiah Glacier in Pakistan grew more than 12 kilometers in three months.
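To put the Kutiah surge in perspective against the typical flow rates just quoted, here is a quick calculation (taking 'three months' as roughly 90 days, which is an assumption made for the arithmetic):

```python
# Comparing the 1953 Kutiah Glacier surge with ordinary glacier flow,
# using the figures in the passage and assuming three months is about 90 days.

INCH_TO_M = 0.0254

normal_summer_m_per_day = 10 * INCH_TO_M   # typical summer flow, ~0.25 m/day
surge_m_per_day = 12_000 / 90              # more than 12 km in about three months

print(f"Typical summer flow: {normal_summer_m_per_day:.2f} m/day")
print(f"Kutiah surge: {surge_m_per_day:.0f} m/day "
      f"(~{surge_m_per_day / normal_summer_m_per_day:.0f}x faster)")
# -> roughly 133 m/day, several hundred times the normal rate
```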

The weight and pressure of ice accumulation causes glacier movement. Glaciers move out from under themselves, via plastic deformation and basal slippage. First, the internal flow of ice crystals begins to spread outward and downward from the thickened snow pack also known as the zone of accumulation. Next, the ice along the ground surface begins to slip in the same direction. Seasonal thawing at the base of the glacier helps to facilitate this slippage. The middle of a glacier moves faster than the sides and bottom because there is no rock to cause friction. The upper part of a glacier rides on the ice below. As a glacier moves it carves out a U-shaped valley similar to a riverbed, but with much steeper walls and a flatter bottom.

Besides the extraordinary rivers of ice, glacial erosion creates other unique physical features in the landscape such as horns, fjords, hanging valleys, and cirques. Most of these landforms do not become visible until after a glacier has receded. Many are created by moraines, which occur at the sides and front of a glacier. Moraines are formed when material is picked up along the way and deposited in a new location. When many alpine glaciers occur on the same mountain, these moraines can create a horn. The Matterhorn, in the Swiss Alps, is one of the most famous horns. Fjords, which are very common in Norway, are coastal valleys that fill with ocean water during a glacial retreat. Hanging valleys occur when two or more glacial valleys intersect at varying elevations. It is common for waterfalls to connect the higher and lower hanging valleys, such as in Yosemite National Park. A cirque is a large bowl-shaped valley that forms at the front of a glacier. Cirques often have a lip on their down slope that is deep enough to hold small lakes when the ice melts away.

Glacier movement and shape shifting typically occur over hundreds of years. While presently about 10 percent of the earth’s land is covered with glaciers, it is believed that during the last Ice Age glaciers covered approximately 32 percent of the earth’s surface. In the past century, most glaciers have been retreating rather than flowing forward. It is unknown whether this glacial activity is due to human impact or natural causes, but by studying glacier movement, and comparing climate and agricultural profiles over hundreds of years, glaciologists can begin to understand environmental issues such as global warming.

The history of jeans

The first people to wear jeans were sailors in the 16th century. Sailors were gone for a long time. They had to do hard work outside in bad weather. Often their clothes had holes in them, got thinner or lost color. They needed something strong against wear and tear. Their clothes had to last longer and stay in good condition. They found this type of cloth during their trip to India. It was made of thick cotton and was called dungaree. It was dyed indigo. Indians used the indigo plant to color this type of cloth in factories. Sailors bought dungaree cloth in outside markets, cut it and wore it on their trips home.

The first jeans were made in Genoa. Genoa is a city in Italy. In the 16th century, Genoa was very powerful. Its sailors traveled all around the world. The city of Genoa decided to make better pants for its sailors. They used the dungaree cloth because it was sturdy and strong. The new pants were called 'geanos' or 'jeanos'. Sailors could wear them in both wet and dry weather. They could roll up the pants when cleaning the ship. To clean the pants, they put them inside a net, threw it in the ocean and dragged the net behind the ship. This is when they realized that the color changed to white. This is how bleached jeans were invented.

Later French workers in Nimes also made jeans. They used a different type of cloth called denim. But it was also sturdy and dyed blue, like the jeanos. In 1872, there was a small cloth merchant in Germany. His name was Levi Strauss. He bought and sold denim from France, but Levi Strauss got into trouble and had to go away to America. In New York Levi learned how to sew. When he moved to San Francisco, he met many gold diggers. These men went to find gold in rivers. The weather was often bad and the men wore only thin pants. Levi started to cut pants out of denim. He sold these jeans to the gold diggers, and they loved them. Soon all factory workers and farmers were wearing jeans too. They were comfortable, cheap and easy to take care of.

In the 1950s, popular movie and music stars like Elvis Presley and James Dean started wearing jeans. Those jeans were really tight and parents didn't like them. But they caught on with teenagers. Jeans became so popular because they meant freedom. Teenagers wanted to be independent and to make their own rules. In the 1960s they started to decorate their jeans with flowers and colorful designs, or to tear and rip the jeans.

But in the 1980s jeans became very expensive. Famous fashion designers like Calvin Klein began making designer jeans. They put their name on these jeans. Young people wanted to wear certain brands to show their style. There was a lot of pressure to keep up with the trend. Everybody wanted to be fashionable. Jeans were considered the uniform of youth. You had to wear jeans to be in style.

 

 

Snow-makers

Skiing is big business nowadays. But what can ski resort owners do if the snow doesn't come?

In the early to mid twentieth century, with the growing popularity of skiing, ski slopes became extremely profitable businesses. But ski resort owners were completely dependent on the weather: if it didn't snow, or didn’t snow enough, they had to close everything down. Fortunately, a device called the snow gun can now provide snow whenever it is needed. These days such machines are standard equipment in the vast majority of ski resorts around the world, making it possible for many resorts to stay open for months or more a year.

Snow formed by natural weather systems comes from water vapour in the atmosphere. The water vapour condenses into droplets, forming clouds. If the temperature is sufficiently low, the water droplets freeze into tiny ice crystals. More water particles then condense onto the crystal and join with it to form a snowflake. As the snowflake grows heavier, it falls towards the Earth.

The snow gun works very differently from a natural weather system, but it accomplishes exactly the same thing. The device basically works by combining water and air. Two different hoses are attached to the gun, one leading from a water pumping station which pumps water up from a lake or reservoir, and the other leading from an air compressor. When the compressed air passes through the hose into the gun, it atomises the water - that is, it disrupts the stream so that the water splits up into tiny droplets. The droplets are then blown out of the gun and if the outside temperature is below 0°C, ice crystals will form, and will then make snowflakes in the same way as natural snow.

Snow-makers often talk about dry snow and wet snow. Dry snow has a relatively low amount of water, so it is very light and powdery. This type of snow is excellent for skiing because skis glide over it easily without getting stuck in wet slush. One of the advantages of using a snow-maker is that this powdery snow can be produced to give the ski slopes a level surface. However, on slopes which receive heavy use, resort owners also use denser, wet snow underneath the dry snow. Many resorts build up the snow depth this way once or twice a year, and then regularly coat the trails with a layer of dry snow throughout the winter.

The wetness of snow is dependent on the temperature and humidity outside, as well as the size of the water droplets launched by the gun. Snow-makers have to adjust the proportions of water and air in their snow guns to get the perfect snow consistency for the outdoor weather conditions. Many ski slopes now do this with a central computer system that is connected to weather-reading stations all over the slope.

But man-made snow makes heavy demands on the environment. It takes about 275,000 litres of water to create a blanket of snow covering a 60x60 metre area. Most resorts pump water from one or more reservoirs located in low-lying areas. The run-off water from the slopes feeds back into these reservoirs, so the resort can actually use the same water over and over again. However, considerable amounts of energy are needed to run the large air-compressing pumps, and the diesel engines which run them also cause air pollution.
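For a sense of scale, the quoted 275,000 litres over a 60 x 60 metre area works out as follows; only the water-equivalent depth is calculated, since the depth of the resulting snow depends on its density, which the passage does not give:

```python
# Water-equivalent depth of the man-made snow blanket described above.

water_litres = 275_000
area_m2 = 60 * 60              # the 60 x 60 metre area quoted in the passage

litres_per_m2 = water_litres / area_m2
depth_mm = litres_per_m2       # 1 litre spread over 1 square metre is a 1 mm layer of water
print(f"{litres_per_m2:.0f} litres per square metre, i.e. about {depth_mm:.0f} mm of water")
# -> roughly 76 mm of water spread over the slope (the snow itself is deeper,
#    since snow is much less dense than water)
```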

Because of the expense of making snow, ski resorts have to balance the cost of running the machines with the benefits of extending the ski season, making sure they only make snow when it is really needed and when it will bring the maximum amount of profit in return for the investment. But man-made snow has a number of other uses as well. A layer of snow keeps a lot of the Earth’s heat from escaping into the atmosphere, so farmers often use man-made snow to provide insulation for winter crops. Snow-making machines have played a big part in many movie productions. Movie producers often take several months to shoot scenes that cover just a few days. If the movie takes place in a snowy setting, the set decorators have to get the right amount of snow for each day of shooting either by adding man-made snow or melting natural snow. And another important application of man-made snow is its use in the tests that aircraft must undergo in order to ensure that they can function safely in extreme conditions.

 

 

BAKELITE - The birth of modern plastics

In 1907, Leo Hendrick Baekeland, a Belgian scientist working in New York, discovered and patented a revolutionary new synthetic material. His invention, which he named 'Bakelite', was of enormous technological importance, and effectively launched the modern plastics industry.

The term 'plastic' comes from the Greek plassein, meaning 'to mould'. Some plastics are derived from natural sources, some are semi-synthetic (the result of chemical action on a natural substance), and some are entirely synthetic, that is, chemically engineered from the constituents of coal or oil. Some are 'thermoplastic', which means that, like candlewax, they melt when heated and can then be reshaped. Others are 'thermosetting': like eggs, they cannot revert to their original viscous state, and their shape is thus fixed for ever. Bakelite had the distinction of being the first totally synthetic thermosetting plastic.

The history of today's plastics begins with the discovery of a series of semi-synthetic thermoplastic materials in the mid-nineteenth century. The impetus behind the development of these early plastics was generated by a number of factors - immense technological progress in the domain of chemistry, coupled with wider cultural changes, and the pragmatic need to find acceptable substitutes for dwindling supplies of 'luxury' materials such as tortoiseshell and ivory.

Baekeland's interest in plastics began in 1885 when, as a young chemistry student in Belgium, he embarked on research into phenolic resins, the group of sticky substances produced when phenol (carbolic acid) combines with an aldehyde (a volatile fluid similar to alcohol). He soon abandoned the subject, however, only returning to it some years later. By 1905 he was a wealthy New Yorker, having recently made his fortune with the invention of a new photographic paper. While Baekeland had been busily amassing dollars, some advances had been made in the development of plastics. The years 1899 and 1900 had seen the patenting of the first semi-synthetic thermosetting material that could be manufactured on an industrial scale. In purely scientific terms, Baekeland's major contribution to the field is not so much the actual discovery of the material to which he gave his name, but rather the method by which a reaction between phenol and formaldehyde could be controlled, thus making possible its preparation on a commercial basis. On 13 July 1907, Baekeland took out his famous patent describing this preparation, the essential features of which are still in use today.

The original patent outlined a three-stage process, in which phenol and formaldehyde (from wood or coal) were initially combined under vacuum inside a large egg-shaped kettle. The result was a resin known as Novalak, which became soluble and malleable when heated. The resin was allowed to cool in shallow trays until it hardened, and then broken up and ground into powder. Other substances were then introduced: including fillers, such as woodflour, asbestos or cotton, which increase strength and moisture resistance, catalysts (substances to speed up the reaction between two chemicals without joining to either) and hexa, a compound of ammonia and formaldehyde which supplied the additional formaldehyde necessary to form a thermosetting resin. This resin was then left to cool and harden, and ground up a second time. The resulting granular powder was raw Bakelite, ready to be made into a vast range of manufactured objects. In the last stage, the heated Bakelite was poured into a hollow mould of the required shape and subjected to extreme heat and pressure, thereby 'setting' its form for life.

The design of Bakelite objects, everything from earrings to television sets, was governed to a large extent by the technical requirements of the moulding process. The object could not be designed so that it was locked into the mould and therefore difficult to extract. A common general rule was that objects should taper towards the deepest part of the mould, and if necessary the product was moulded in separate pieces. Moulds had to be carefully designed so that the molten Bakelite would flow evenly and completely into the mould. Sharp corners proved impractical and were thus avoided, giving rise to the smooth, 'streamlined' style popular in the 1930s. The thickness of the walls of the mould was also crucial: thick walls took longer to cool and harden, a factor which had to be considered by the designer in order to make the most efficient use of machines.

Baekeland's invention, although treated with disdain in its early years, went on to enjoy an unparalleled popularity which lasted throughout the first half of the twentieth century. It became the wonder product of the new world of industrial expansion - 'the material of a thousand uses'. Being both non-porous and heat-resistant, Bakelite kitchen goods were promoted as being germ-free and sterilisable. Electrical manufacturers seized on its insulating properties, and consumers everywhere relished its dazzling array of shades, delighted that they were now, at last, no longer restricted to the wood tones and drab browns of the preplastic era. It then fell from favour again during the 1950s, and was despised and destroyed in vast quantities. Recently, however, it has been experiencing something of a renaissance, with renewed demand for original Bakelite objects in the collectors' marketplace, and museums, societies and dedicated individuals once again appreciating the style and originality of this innovative material.

 

 

The story of silk

The history of the world’s most luxurious fabric, from ancient China to the present day

Silk is a fine, smooth material produced from the cocoons - soft protective shells - that are made by mulberry silkworms (insect larvae). Legend has it that it was Lei Tzu, wife of the Yellow Emperor, ruler of China in about 3000 BC, who discovered silkworms. One account of the story goes that as she was taking a walk in her husband’s gardens, she discovered that silkworms were responsible for the destruction of several mulberry trees. She collected a number of cocoons and sat down to have a rest. It just so happened that while she was sipping some tea, one of the cocoons that she had collected landed in the hot tea and started to unravel into a fine thread. Lei Tzu found that she could wind this thread around her fingers. Subsequently, she persuaded her husband to allow her to rear silkworms on a grove of mulberry trees. She also devised a special reel to draw the fibres from the cocoon into a single thread so that they would be strong enough to be woven into fabric. While it is unknown just how much of this is true, it is certainly known that silk cultivation has existed in China for several millennia.

Originally, silkworm farming was solely restricted to women, and it was they who were responsible for the growing, harvesting and weaving. Silk quickly grew into a symbol of status, and originally, only royalty were entitled to have clothes made of silk. The rules were gradually relaxed over the years until finally during the Qing Dynasty (1644—1911 AD), even peasants, the lowest caste, were also entitled to wear silk. Sometime during the Han Dynasty (206 BC-220 AD), silk was so prized that it was also used as a unit of currency. Government officials were paid their salary in silk, and farmers paid their taxes in grain and silk. Silk was also used as diplomatic gifts by the emperor. Fishing lines, bowstrings, musical instruments and paper were all made using silk. The earliest indication of silk paper being used was discovered in the tomb of a noble who is estimated to have died around 168 AD.

Demand for this exotic fabric eventually created the lucrative trade route now known as the Silk Road, taking silk westward and bringing gold, silver and wool to the East. It was named the Silk Road after its most precious commodity, which was considered to be worth more than gold. The Silk Road stretched over 6,000 kilometres from Eastern China to the Mediterranean Sea, following the Great Wall of China, climbing the Pamir mountain range, crossing modern-day Afghanistan and going on to the Middle East, with a major trading market in Damascus. From there, the merchandise was shipped across the Mediterranean Sea. Few merchants travelled the entire route; goods were handled mostly by a series of middlemen.

With the mulberry silkworm being native to China, the country was the world’s sole producer of silk for many hundreds of years. The secret of silk-making eventually reached the rest of the world via the Byzantine Empire, which ruled over the Mediterranean region of southern Europe, North Africa and the Middle East during the period 330—1453 AD. According to another legend, monks working for the Byzantine emperor Justinian smuggled silkworm eggs to Constantinople (Istanbul in modern-day Turkey) in 550 AD, concealed inside hollow bamboo walking canes. The Byzantines were as secretive as the Chinese, however, and for many centuries the weaving and trading of silk fabric was a strict imperial monopoly. Then in the seventh century, the Arabs conquered Persia, capturing their magnificent silks in the process.

Silk production thus spread through Africa, Sicily and Spain as the Arabs swept through these lands. Andalusia in southern Spain was Europe’s main silk-producing centre in the tenth century. By the thirteenth century, however, Italy had become Europe’s leader in silk production and export. Venetian merchants traded extensively in silk and encouraged silk growers to settle in Italy. Even now, silk processed in the province of Como in northern Italy enjoys an esteemed reputation.

The nineteenth century and industrialisation saw the downfall of the European silk industry. Cheaper Japanese silk, trade in which was greatly facilitated by the opening of the Suez Canal, was one of the many factors driving the trend. Then in the twentieth century, new manmade fibres, such as nylon, started to be used in what had traditionally been silk products, such as stockings and parachutes. The two world wars, which interrupted the supply of raw material from Japan, also stifled the European silk industry. After the Second World War, Japan’s silk production was restored, with improved production and quality of raw silk. Japan was to remain the world’s biggest producer of raw silk, and practically the only major exporter of raw silk, until the 1970s. However, in more recent decades, China has gradually recaptured its position as the world’s biggest producer and exporter of raw silk and silk yarn. Today, around 125,000 metric tons of silk are produced in the world, and almost two thirds of that production takes place in China.

 

 

Green virtues of green sand

Revolution in glass recycling could help keep water clean
 

For the past 100 years, special high-grade white sand dug from the ground at Leighton Buzzard in the UK has been used to filter tap water to remove bacteria and impurities, but this may no longer be necessary. A new factory that turns used wine bottles into green sand could revolutionise the recycling industry and help to filter Britain’s drinking water. Backed by $1.6m from the European Union and the Department for Environment, Food and Rural Affairs (Defra), a company based in Scotland is building the factory, which will turn beverage bottles back into the sand from which they were made in the first place. The green sand has already been successfully tested by water companies and is being used in 50 swimming pools in Scotland to keep the water clean.

The idea is not only to avoid using up an increasingly scarce natural resource, sand, but also to solve a crisis in the recycling industry. Britain uses 5.5m tonnes of glass a year, but recycles only 750,000 tonnes of it. The problem is that half the green bottle glass in Britain is originally from imported wine and beer bottles. Because there is so much of it, and it is used less in domestic production than other types, green glass is worth only $25 a tonne. Clear glass, which is melted down and used for whisky bottles, mainly for export, is worth double that amount.

Howard Dryden, a scientist and managing director of the company, Dryden Aqua, of Bonnyrigg, near Edinburgh, has spent six years working on the product he calls Active Filtration Media, or AFM. He concedes that he has given what is basically recycled glass a ‘fancy name’ to remove the stigma of what most people would regard as an inferior product. He says he needs bottles that have already contained drinkable liquids to be sure that drinking water filtered through the AFM would not be contaminated. Crushed down beverage glass has fewer impurities than real sand and it performed better in trials. ‘The fact is that tests show that AFM does the job better than sand, it is easier to clean and reuse and has all sorts of properties that make it ideal for other applications,’ he claimed.

The factory is designed to produce 100 tonnes of AFM a day, although Mr Dryden regards this as a large-scale pilot project rather than full production. Current estimates of the UK market for this glass for filtering drinking water, sewage, industrial water, swimming pools and fish farming are between 175,000 and 217,000 tonnes a year, which will use up most of the glass available near the factory. So he intends to build five or six factories in cities where there are large quantities of bottles, in order to cut down on transport costs.
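The plan for five or six factories follows from a simple capacity calculation, sketched below with the figures quoted in the passage (and assuming the plant runs year-round, which the text does not state):

```python
# Why one 100-tonne-a-day plant is not enough for the estimated UK market.

daily_output_tonnes = 100
days_per_year = 365                          # assumed continuous operation
market_low, market_high = 175_000, 217_000   # estimated UK demand, tonnes per year

annual_output = daily_output_tonnes * days_per_year
plants_needed_low = market_low / annual_output
plants_needed_high = market_high / annual_output
print(f"One plant produces about {annual_output:,} tonnes a year")
print(f"Meeting the estimated market would need roughly "
      f"{plants_needed_low:.1f} to {plants_needed_high:.1f} plants")
# -> about 36,500 tonnes a year per plant, i.e. around five or six plants in total
```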

The current factory will be completed this month and is expected to go into full production on January 14th next year. Once it is providing a ‘regular’ product, the government’s drinking water inspectorate will be asked to perform tests and approve it for widespread use by water companies. A Defra spokesman said it was hoped that AFM could meet approval within six months. The only problem that they could foresee was possible contamination if some glass came from sources other than beverage bottles.

Among those who have tested the glass already is Caroline Fitzpatrick of the civil and environmental engineering department of University College London. ‘We have looked at a number of batches and it appears to do the job,’ she said. ‘Basically, sand is made of glass and Mr Dryden is turning bottles back into sand. It seems a straightforward idea and there is no reason we can think of why it would not work. Since glass from wine bottles and other beverages has no impurities and clearly did not leach any substances into the contents of the bottles, there was no reason to believe there would be a problem,’ Dr Fitzpatrick added.

Mr Dryden has set up a network of agents round the world to sell AFM. It is already in use in central America to filter water on banana plantations where the fruit has to be washed before being despatched to European markets. It is also in use in sewage works to filter water before it is returned to rivers, something which is becoming legally necessary across the European Union because of tighter regulations on sewage works. So there are a great number of applications involving cleaning up water. Currently, however, AFM costs $670 a tonne, about four times as much as good quality sand. ‘But that is because we haven’t got large-scale production. Obviously, when we get going it will cost a lot less, and be competitive with sand in price as well,’ Mr Dryden said. ‘I believe it performs better and lasts longer than sand, so it is going to be better value too.’

If AFM takes off as a product it will be a big boost for the government agency which is charged with finding a market for recycled products. Crushed glass is already being used in road surfacing and in making tiles and bricks. Similarly, AFM could prove to have a widespread use and give green glass a cash value.

 

The meaning of volunteering

Volunteering is sometimes thought, mistakenly, to involve a plethora of people from all walks of life as well as a wide range of activities, but data from the other side of the world suggest otherwise. For example, a survey on who participated in volunteering by the Office for National Statistics (ONS) in the United Kingdom (UK) showed that people in higher income households are more likely than others to volunteer. In England and Wales, 57% of adults with gross annual household incomes of £75,000 or more had volunteered formally in the 12 months prior to the survey date. They were almost twice as likely to have done so as those living in households with an annual income under £10,000.

As well as having high household incomes, volunteers also tend to have higher academic qualifications, be in higher socio-economic groups and be in employment. Among people with a degree or postgraduate qualification, 79 per cent had volunteered informally and 57 per cent had volunteered formally in the previous 12 months. For people with no qualifications the corresponding proportions were 52 per cent and 23 per cent respectively. However, voluntary work is certainly not the exclusive preserve of the rich. Does the answer not lie perhaps in the fact that the rich tend to have the money that allows them the time to become involved in voluntary work compared to less well-off people?

A breakdown in the year 2000 of the range of volunteering activities taken from The Australian Bureau of Statistics gives an idea of the scale of activities in which people are typically involved. Eleven sectors are given, ranging from Community and Welfare, which accounted for just over a quarter of the total hours volunteered in Australia, to Law/justice/politics with 1.2 per cent at the other end of the scale. Other fields included sport/recreation, religious activities and education, following at 21.1 per cent, 16.9 per cent and 14.3 per cent of the total hours respectively. The data here also seem to point to a cohort of volunteers with expertise and experience.

The knock-on effect of volunteering on the lives of individuals can be profound. Voluntary work helps foster independence and imparts the ability to deal with different situations, often simultaneously, thus teaching people how to work their way through different systems. It therefore brings people into touch with the real world; and, hence, equips them for the future.

Initially, young adults in their late teens might not seem to have the expertise or knowledge to impart to others that, say, a teacher or agriculturalist or nurse would have, but they do have many skills that can help others. And in the absence of any particular talent, their energy and enthusiasm can be harnessed for the benefit of their fellow human beings, and ultimately themselves. From all this, the gain to any community, no matter how many volunteers are involved, is immeasurable.

Employers will generally look favorably on people who have shown an ability to work as part of a team. It demonstrates a willingness to learn and an independent spirit, which would be desirable qualities in any employee. So to satisfy employers’ demands for experience when applying for work, volunteering can act as a means of gaining experience that might otherwise elude would-be workers and can ultimately lead to paid employment in the desired field.

But what are the prerequisites for becoming a volunteer? One might immediately think of attributes like kindness, selflessness, strength of character, ability to deal with others, determination, adaptability and flexibility and a capacity to comprehend the ways of other people. While offering oneself selflessly, working as a volunteer makes further demands on the individual. It requires a strength of will, a sense of moral responsibility for one’s fellow human beings, and an ability to fit into the ethos of an organization. But it also requires something which in no way detracts from valuable work done by volunteers and which may seem at first glance both contradictory and surprising: self-interest.

Organizations involved in any voluntary work have to be realistic about this. If someone, whatever their age, is going to volunteer and devote their time without payment, they do need to receive something from it for themselves. People who are unemployed can use volunteer work as a stepping-stone to employment, or as a means of finding out whether they really like the field they plan to enter, or as a way to help them find themselves.

It is tempting to use some form of community work as an alternative to national service, or as a punishment for petty criminals by making the latter, for example, clean up parks, wash away graffiti or work with the victims of their own or other people’s crimes. This may be acceptable, but it does not constitute volunteer work, two cardinal rules of which are the willingness to volunteer without coercion and working unpaid.

FINDING THE LOST FREEDOM

The private car is assumed to have widened our horizons and increased our mobility. When we consider our children’s mobility, they can be driven to more places (and more distant places) than they could visit without access to a motor vehicle. However, allowing our cities to be dominated by cars has progressively eroded children’s independent mobility. Children have lost much of their freedom to explore their own neighborhood or city without adult supervision. In recent surveys, when parents in some cities were asked about their own childhood experiences, the majority remembered having more, or far more, opportunities for going out on their own, compared with their own children today. They had more freedom to explore their own environment.

Children’s independent access to their local streets may be important for their own personal, mental and psychological development. Allowing them to get to know their own neighborhood and community gives them a "sense of place”. This depends on "active exploration”, which is not provided for when children are passengers in cars. (Such children may see more, but they learn less.) Not only is it important that children be able to get to local play areas by themselves, but walking and cycling journeys to school and to other destinations provide genuine play activities in themselves.

There are very significant time and money costs for parents associated with transporting their children to school, sport and other locations. Research in the United Kingdom estimated that this cost, in 1990, was between 10 billion and 20 billion pounds. (AIPPG)

The reduction in children’s freedom may also contribute to a weakening of the sense of local community. As fewer children and adults use the streets as pedestrians, these streets become less sociable places. There is less opportunity for children and adults to have the spontaneous exchanges that help to build a sense of community. This in itself may exacerbate fears associated with assault and molestation of children, because there are fewer adults available who know their neighbors’ children, and who can look out for their safety.

The extra traffic involved in transporting children results in increased traffic congestion, pollution and accident risk. As our roads become more dangerous, more parents drive their children to more places, thus contributing to increased levels of danger for the remaining pedestrians. Anyone who has experienced either the reduced volume of traffic in peak hour during school holidays, or the traffic jams near schools at the end of a school day, will not need convincing about these points. Thus, there are also important environmental implications of children’s loss of freedom.

As individuals, parents strive to provide the best upbringing they can for their children. However, in doing so, (e.g. by driving their children to sport, school or recreation) parents may be contributing to a more dangerous environment for children generally. The idea that “streets are for cars and back yards and playgrounds are for children” is a strongly held belief, and parents have little choice as individuals but to keep their children off the streets if they want to protect their safety.

In many parts of Dutch cities, and some traffic calmed precincts in Germany, residential streets are now places where cars must give way to pedestrians. In these areas, residents are accepting the view that the function of streets is not solely to provide mobility for cars. Streets may also be for social interaction, walking, cycling and playing. One of the most important aspects of these European streets, in terms of giving cities back to children, has been a range of “traffic calming” initiatives, aimed at reducing the volume and speed of traffic. These initiatives have had complex interactive effects, leading to a sense that children are able to use the streets in safety. Recent research has demonstrated that children in many German cities have significantly higher levels of freedom to travel to places in their own neighborhood or city than children in other cities in the world.

Modifying cities in order to enhance children’s freedom will not only benefit children. Such cities will become more environmentally sustainable, as well as more sociable and more livable for all city residents. Perhaps, it will be our concern for our children’s welfare that convinces us that we need to challenge the dominance of the car in our cities.

 

 

Gifted children and learning 

Internationally, ‘giftedness’ is most frequently determined by a score on a general intelligence test, known as an IQ test, which is above a chosen cutoff point, usually at around the top 2-5%. Children’s educational environment contributes to the IQ score and the way intelligence is used. For example, a very close positive relationship was found when children’s IQ scores were compared with their home educational provision (Freeman, 2010). The higher the children’s IQ scores, especially over IQ 130, the better the quality of their educational backup, measured in terms of reported verbal interactions with parents, number of books and activities in their home etc. Because IQ tests are decidedly influenced by what the child has learned, they are to some extent measures of current achievement based on age-norms; that is, how well the children have learned to manipulate their knowledge and know-how within the terms of the test. The vocabulary aspect, for example, is dependent on having heard those words. But IQ tests can neither identify the processes of learning and thinking nor predict creativity.

Excellence does not emerge without appropriate help. To reach an exceptionally high standard in any area very able children need the means to learn, which includes material to work with and focused challenging tuition -and the encouragement to follow their dream. There appears to be a qualitative difference in the way the intellectually highly able think, compared with more average-ability or older pupils, for whom external regulation by the teacher often compensates for lack of internal regulation. To be at their most effective in their self-regulation, all children can be helped to identify their own ways of learning – metacognition – which will include strategies of planning, monitoring, evaluation, and choice of what to learn. Emotional awareness is also part of metacognition, so children should be helped to be aware of their feelings around the area to be learned, feelings of curiosity or confidence, for example.

High achievers have been found to use self-regulatory learning strategies more often and more effectively than lower achievers, and are better able to transfer these strategies to deal with unfamiliar tasks. This happens to such a high degree in some children that they appear to be demonstrating talent in particular areas. Overviewing research on the thinking processes of highly able children, Shore and Kanevsky (1993) put the instructor’s problem succinctly: ‘If they [the gifted] merely think more quickly, then we need only teach more quickly. If they merely make fewer errors, then we can shorten the practice’. But of course, this is not entirely the case; adjustments have to be made in methods of learning and teaching, to take account of the many ways individuals think.

Yet in order to learn by themselves, the gifted do need some support from their teachers. Conversely, teachers who have a tendency to ‘overdirect’ can diminish their gifted pupils’ learning autonomy. Although ‘spoon-feeding’ can produce extremely high examination results, these are not always followed by equally impressive life successes. Too much dependence on the teachers risks loss of autonomy and motivation to discover. However, when teachers help pupils to reflect on their own learning and thinking activities, they increase their pupils’ self-regulation. For a young child, it may be just the simple question ‘What have you learned today?’ which helps them to recognise what they are doing. Given that a fundamental goal of education is to transfer the control of learning from teachers to pupils, improving pupils’ learning to learn techniques should be a major outcome of the school experience, especially for the highly competent. There are quite a number of new methods which can help, such as child-initiated learning, ability-peer tutoring, etc. Such practices have been found to be particularly useful for bright children from deprived areas.

But scientific progress is not all theoretical; knowledge is also vital to outstanding performance: individuals who know a great deal about a specific domain will achieve at a higher level than those who do not (Elshout, 1995). Research with creative scientists by Simonton (1988) brought him to the conclusion that above a certain high level, characteristics such as independence seemed to contribute more to reaching the highest levels of expertise than intellectual skills, due to the great demands of effort and time needed for learning and practice. Creativity in all forms can be seen as expertise mixed with a high level of motivation (Weisberg, 1993).

To sum up, learning is affected by emotions of both the individual and significant others. Positive emotions facilitate the creative aspects of learning and negative emotions inhibit it. Fear, for example, can limit the development of curiosity, which is a strong force in scientific advance, because it motivates problem-solving behaviour. In Boekaerts’ (1991) review of emotion in the learning of very high IQ and highly achieving children, she found emotional forces in harness. They were not only curious, but often had a strong desire to control their environment, improve their learning efficiency and increase their own learning resources.

 

 

Bilingualism in Children

One misguided legacy of over a hundred years of writing on bilingualism1 is that children’s intelligence will suffer if they are bilingual. Some of the earliest research into bilingualism examined whether bilingual children were ahead or behind monolingual2 children on IQ tests. From the 1920s through to the 1960s, the tendency was to find monolingual children ahead of bilinguals on IQ tests. The conclusion was that bilingual children were mentally confused. Having two languages in the brain, it was said, disrupted effective thinking. It was argued that having one well-developed language was superior to having two half-developed languages.

The idea that bilinguals may have a lower IQ still exists among many people, particularly monolinguals. However, we now know that this early research was misconceived and incorrect. First, such research often gave bilinguals an IQ test in their weaker language – usually English. Had bilinguals been tested in Welsh or Spanish or Hebrew, a different result may have been found. The testing of bilinguals was thus unfair. Second, like was not compared with like. Bilinguals tended to come from, for example, impoverished New York or rural Welsh backgrounds. The monolinguals tended to come from more middle class, urban families. Working class bilinguals were often compared with middle class monolinguals. So the results were more likely to be due to social class differences than language differences. The comparison of monolinguals and bilinguals was unfair.

The most recent research from Canada, the United States and Wales suggests that bilinguals are, at least, equal to monolinguals on IQ tests. When bilinguals have two well-developed languages (in the research literature called balanced bilinguals), bilinguals tend to show a slight superiority in IQ tests compared with monolinguals. This is the received psychological wisdom of the moment and is good news for raising bilingual children. Take, for example, a child who can operate in either language in the curriculum in the school. That child is likely to be ahead on IQ tests compared with similar (same gender, social class and age) monolinguals. Far from making people mentally confused, bilingualism is now associated with a mild degree of intellectual superiority.

One note of caution needs to be sounded. IQ tests probably do not measure intelligence. IQ tests measure a small sample of the broadest concept of intelligence. IQ tests are simply paper and pencil tests where only ‘right and wrong’ answers are allowed. Is all intelligence summed up in such right and wrong, pencil and paper tests? Isn’t there a wider variety of intelligences that are important in everyday functioning and everyday life?

Many questions need answering. Do we only define an intelligent person as somebody who obtains a high score on an IQ test? Are the only intelligent people those who belong to high IQ organisations such as MENSA? Is there social intelligence, musical intelligence, military intelligence, marketing intelligence, motoring intelligence, political intelligence? Are all, or indeed any, of these forms of intelligence measured by a simple pencil and paper IQ test which demands a single, acceptable, correct solution to each question? Defining what constitutes intelligent behaviour requires a personal value judgement as to what type of behaviour, and what kind of person is of more worth.

The current state of psychological wisdom about bilingual children is that, where two languages are relatively well developed, bilinguals have thinking advantages over monolinguals. Take an example. A child is asked a simple question: How many uses can you think of for a brick? Some children give two or three answers only. They can think of building walls, building a house and perhaps that is all. Another child scribbles away, pouring out ideas one after the other: blocking up a rabbit hole, breaking a window, using as a bird bath, as a plumb line, as an abstract sculpture in an art exhibition.

Research across different continents of the world shows that bilinguals tend to be more fluent, flexible, original and elaborate in their answers to this type of open-ended question. The person who can think of a few answers tends to be termed a convergent thinker. They converge onto a few acceptable conventional answers. People who think of lots of different uses for unusual items (e.g. a brick, tin can, cardboard box) are called divergers. Divergers like a variety of answers to a question and are imaginative and fluent in their thinking.

There are other dimensions in thinking where approximately ‘balanced’ bilinguals may have temporary and occasionally permanent advantages over monolinguals: increased sensitivity to communication, a slightly speedier movement through the stages of cognitive development, and being less fixed on the sounds of words and more centred on the meaning of words. Such ability to move away from the sound of words and fix on the meaning of words tends to be a (temporary) advantage for bilinguals around the ages of four to six. This advantage may mean an initial head start in learning to read and learning to think about language.

1 bilingualism: the ability to speak two languages

2 monolingual: using or speaking only one language

Votes for Women

 

The suffragette movement, which campaigned for votes for women in the early twentieth century, is most commonly associated with the Pankhurst family and militant acts of varying degrees of violence. The Museum of London has drawn on its archive collection to convey a fresh picture with its exhibition

The Purple, White and Green: Suffragettes in London 1906-14.

The name is a reference to the colour scheme that the Women’s Social and Political Union (WSPU) created to give the movement a uniform, nationwide image. By doing so, it became one of the first groups to project a corporate identity, and it is this advanced marketing strategy, along with the other organisational and commercial achievements of the WSPU, to which the exhibition is devoted.

Formed in 1903 by the political campaigner Mrs Emmeline Pankhurst and her daughters Christabel and Sylvia, the WSPU began an educated campaign to put women’s suffrage on the political agenda. New Zealand, Australia and parts of the United States had already enfranchised women, and growing numbers of their British counterparts wanted the same opportunity.

With their slogan ‘Deeds not words’, and the introduction of the colour scheme, the WSPU soon brought the movement the cohesion and focus it had previously lacked.

Membership grew rapidly as women deserted the many other, less directed, groups and joined it. By 1906 the WSPU headquarters, called the Women’s Press Shop, had been established in Charing Cross Road and in spite of limited communications (no radio or television, and minimal use of the telephone) the message had spread around the country, with members and branch officers stretching to as far away as Scotland.

The newspapers produced by the WSPU, first Votes for Women and later The Suffragette, played a vital role in this communication. Both were sold throughout the country and proved an invaluable way of informing members of meetings, marches, fund-raising events and the latest news and views on the movement.

Equally importantly for a rising political group, the newspaper returned a profit. This was partly because advertising space was bought in the paper by large department stores such as Selfridges, and jewellers such as Mappin & Webb. These two, together with other like-minded commercial enterprises sympathetic to the cause, had quickly identified a direct way to reach a huge market of women, many with money to spend.

The creation of the colour scheme provided another money-making opportunity which the WSPU was quick to exploit. The group began to sell playing cards, board games, Christmas and greeting cards, and countless other goods, all in the purple, white and green colours. In 1906 such merchandising of a corporate identity was a new marketing concept.

But the paper and merchandising activities alone did not provide sufficient funds for the WSPU to meet organisational costs, so numerous other fund-raising activities combined to fill the coffers of the ‘war chest’. The most notable of these was the Woman’s Exhibition, which took place in 1909 in a Knightsbridge ice-skating rink, and in 10 days raised the equivalent of £250,000 today.

The Museum of London’s exhibition is largely visual, with a huge number of items on show. Against a quiet background hum of street sounds, copies of The Suffragette, campaign banners and photographs are all on display, together with one of Mrs Pankhurst’s shoes and a number of purple, white and green trinkets.

Photographs depict vivid scenes of a suffragette's life: WSPU members on a self-proclaimed 'monster' march, wearing their official uniforms of a white frock decorated with purple, white and green accessories; women selling The Suffragette at street corners, or chalking up pavements with details of a forthcoming meeting.

Windows display postcards and greeting cards designed by women artists for the movement, and the quality of the artwork indicates the wealth of resources the WSPU could call on from its talented members.

Visitors can watch a short film made up of old newsreels and cinema material which clearly reveals the political mood of the day towards the suffragettes. The programme begins with a short film devised by the 'antis' - those opposed to women having the vote - depicting a suffragette as a fierce harridan bullying her poor, abused husband.

Original newsreel footage shows the suffragette Emily Wilding Davison throwing herself under King George V's horse at a famous race.

Although the exhibition officially charts the years 1906 to 1914, graphic display boards outlining the bills of enfranchisement of 1918 and 1928, which gave the adult female populace of Britain the vote, show what was achieved. It demonstrates how advanced the suffragettes were in their thinking, in the marketing of their campaign, and in their work as shrewd and skilful image-builders. It also conveys a sense of the energy and ability the suffragettes brought to their fight for freedom and equality. And it illustrates the intelligence employed by women who were at that time deemed by several politicians to have ‘brains too small to know how to vote’.

 

The Development of Museums

 

The conviction that historical relics provide infallible testimony about the past is rooted in the nineteenth and early twentieth centuries, when science was regarded as objective and value free. As one writer observes: 'Although it is now evident that artefacts are as easily altered as chronicles, public faith in their veracity endures: a tangible relic seems ipso facto real.' Such conviction was, until recently, reflected in museum displays. Museums used to look - and some still do - much like storage rooms of objects packed together in showcases: good for scholars who wanted to study the subtle differences in design, but not for the ordinary visitor, to whom it all looked alike. Similarly, the information accompanying the objects often made little sense to the lay visitor. The content and format of explanations dated back to a time when the museum was the exclusive domain of the scientific researcher.

Recently, however, attitudes towards history and the way it should be presented have altered. The key word in heritage display is now 'experience', the more exciting the better and, if possible, involving all the senses. Good examples of this approach in the UK are the Jorvik Centre in York; the National Museum of Photography, Film and Television in Bradford; and the Imperial War Museum in London. In the US the trend emerged much earlier: Williamsburg has been a prototype for many heritage developments in other parts of the world. No one can predict where the process will end. On so-called heritage sites the re-enactment of historical events is increasingly popular, and computers will soon provide virtual reality experiences, which will present visitors with a vivid image of the period of their choice, in which they themselves can act as if part of the historical environment. Such developments have been criticised as an intolerable vulgarisation, but the success of many historical theme parks and similar locations suggests that the majority of the public does not share this opinion.

In a related development, the sharp distinction between museum and heritage sites on the one hand, and theme parks on the other, is gradually evaporating. They already borrow ideas and concepts from one another. For example, museums have adopted story lines for exhibitions, sites have accepted 'theming' as a relevant tool, and theme parks are moving towards more authenticity and research-based presentations. In zoos, animals are no longer kept in cages, but in great spaces, either in the open air or in enormous greenhouses, such as the jungle and desert environments in Burgers' Zoo in Holland. This particular trend is regarded as one of the major developments in the presentation of natural history in the twentieth century.

Theme parks are undergoing other changes, too, as they try to present more serious social and cultural issues, and move away from fantasy. This development is a response to market forces and, although museums and heritage sites have a special, rather distinct, role to fulfil, they are also operating in a very competitive environment, where visitors make choices on how and where to spend their free time. Heritage and museum experts do not have to invent stories and recreate historical environments to attract their visitors: their assets are already in place. However, exhibits must be both based on artefacts and facts as we know them, and attractively presented. Those who are professionally engaged in the art of interpreting history are thus in a difficult position, as they must steer a narrow course between the demands of 'evidence' and 'attractiveness', especially given the increasing need in the heritage industry for income-generating activities.

It could be claimed that in order to make everything in heritage more 'real', historical accuracy must be increasingly altered. For example, Pithecanthropus erectus is depicted in an Indonesian museum with Malay facial features, because this corresponds to public perceptions. Similarly, in the Museum of Natural History in Washington, Neanderthal man is shown making a dominant gesture to his wife. Such presentations tell us more about contemporary perceptions of the world than about our ancestors. There is one compensation, however, for the professionals who make these interpretations: if they did not provide the interpretation, visitors would do it for themselves, based on their own ideas, misconceptions and prejudices. And no matter how exciting the result, it would contain a lot more bias than the presentations provided by experts.

Human bias is inevitable, but another source of bias in the representation of history has to do with the transitory nature of the materials themselves. The simple fact is that not everything from history survives the historical process. Castles, palaces and cathedrals have a longer lifespan than the dwellings of ordinary people. The same applies to the furnishings and other contents of the premises. In a town like Leyden in Holland, which in the seventeenth century was occupied by approximately the same number of inhabitants as today, people lived within the walled town, an area more than five times smaller than modern Leyden. In most of the houses several families lived together in circumstances beyond our imagination. Yet in museums, fine period rooms give only an image of the lifestyle of the upper class of that era. No wonder that people who stroll around exhibitions are filled with nostalgia; the evidence in museums indicates that life was so much better in the past. This notion is induced by the bias in its representation in museums and heritage centres.

 

Museums of fine art and their public

The fact that people go to the Louvre museum in Paris to see the original painting Mona Lisa when they can see a reproduction anywhere leads us to question some assumptions about the role of museums of fine art in today's world.

One of the most famous works of art in the world is Leonardo da Vinci’s Mona Lisa. Nearly everyone who goes to see the original will already be familiar with it from reproductions, but they accept that fine art is more rewardingly viewed in its original form.

However, if Mona Lisa was a famous novel, few people would bother to go to a museum to read the writer’s actual manuscript rather than a printed reproduction. This might be explained by the fact that the novel has evolved precisely because of technological developments that made it possible to print out huge numbers of texts, whereas oil paintings have always been produced as unique objects. In addition, it could be argued that the practice of interpreting or ‘reading’ each medium follows different conventions. With novels, the reader attends mainly to the meaning of words rather than the way they are printed on the page, whereas the ‘reader’ of a painting must attend just as closely to the material form of marks and shapes in the picture as to any ideas they may signify.

Yet it has always been possible to make very accurate facsimiles of pretty well any fine art work. The seven surviving versions of Mona Lisa bear witness to the fact that in the 16th century, artists seemed perfectly content to assign the reproduction of their creations to their workshop apprentices as regular ‘bread and butter’ work. And today the task of reproducing pictures is incomparably more simple and reliable, with reprographic techniques that allow the production of high-quality prints made exactly to the original scale, with faithful colour values, and even with duplication of the surface relief of the painting.

But despite an implicit recognition that the spread of good reproductions can be culturally valuable, museums continue to promote the special status of original work.

Unfortunately, this seems to place severe limitations on the kind of experience offered to visitors.

One limitation is related to the way the museum presents its exhibits. As repositories of unique historical objects, art museums are often called ‘treasure houses’. We are reminded of this even before we view a collection by the presence of security guards, attendants, ropes and display cases to keep us away from the exhibits. In many cases, the architectural style of the building further reinforces that notion. In addition, a major collection like that of London’s National Gallery is housed in numerous rooms, each with dozens of works, any one of which is likely to be worth more than all the average visitor possesses. In a society that judges the personal status of the individual so much by their material worth, it is therefore difficult not to be impressed by one’s own relative ‘worthlessness’ in such an environment.

Furthermore, consideration of the ‘value’ of the original work in its treasure house setting impresses upon the viewer that, since these works were originally produced, they have been assigned a huge monetary value by some person or institution more powerful than themselves. Evidently, nothing the viewer thinks about the work is going to alter that value, and so today’s viewer is deterred from trying to extend that spontaneous, immediate, self-reliant kind of reading which would originally have met the work.

The visitor may then be struck by the strangeness of seeing such diverse paintings, drawings and sculptures brought together in an environment for which they were not originally created. This ‘displacement effect’ is further heightened by the sheer volume of exhibits. In the case of a major collection, there are probably more works on display than we could realistically view in weeks or even months.

This is particularly distressing because time seems to be a vital factor in the appreciation of all art forms. A fundamental difference between paintings and other art forms is that there is no prescribed time over which a painting is viewed. By contrast, the audience encounters an opera or a play over a specific time, which is the duration of the performance. Similarly, novels and poems are read in a prescribed temporal sequence, whereas a picture has no clear place at which to start viewing, or at which to finish. Thus art works themselves encourage us to view them superficially, without appreciating the richness of detail and labour that is involved.

Consequently, the dominant critical approach becomes that of the art historian, a specialised academic approach devoted to 'discovering the meaning' of art within the cultural context of its time. This is in perfect harmony with the museum's function, since the approach is dedicated to seeking out and conserving 'authentic', original, readings of the exhibits. Again, this seems to put paid to that spontaneous, participatory criticism which can be found in abundance in criticism of classic works of literature, but is absent from most art history.

The displays of art museums serve as a warning of what critical practices can emerge when spontaneous criticism is suppressed. The museum public, like any other audience, experience art more rewardingly when given the confidence to express their views. If appropriate works of fine art could be rendered permanently accessible to the public by means of high-fidelity reproductions, as literature and music already are, the public may feel somewhat less in awe of them. Unfortunately, that may be too much to ask from those who seek to maintain and control the art establishment.

MAKING TIME FOR SCIENCE

Chronobiology might sound a little futuristic - like something from a science fiction novel, perhaps - but it’s actually a field of study that concerns one of the oldest processes life on this planet has ever known: short-term rhythms of time and their effect on flora and fauna.

This can take many forms. Marine life, for example, is influenced by tidal patterns. Animals tend to be active or inactive depending on the position of the sun or moon. Numerous creatures, humans included, are largely diurnal - that is, they like to come out during the hours of sunlight. Nocturnal animals, such as bats and possums, prefer to forage by night. A third group are known as crepuscular: they thrive in the low light of dawn and dusk and remain inactive at other hours.

When it comes to humans, chronobiologists are interested in what is known as the circadian rhythm. This is the complete cycle our bodies are naturally geared to undergo within the passage of a twenty-four hour day. Aside from sleeping at night and waking during the day, each cycle involves many other factors such as changes in blood pressure and body temperature. Not everyone has an identical circadian rhythm. ‘Night people’, for example, often describe how they find it very hard to operate during the morning, but become alert and focused by evening. This is a benign variation within circadian rhythms known as a chronotype.

Scientists have limited abilities to create durable modifications of chronobiological demands. Recent therapeutic developments for humans such as artificial light machines and melatonin administration can reset our circadian rhythms, for example, but our bodies can tell the difference and health suffers when we breach these natural rhythms for extended periods of time. Plants appear no more malleable in this respect; studies demonstrate that vegetables grown in season and ripened on the tree are far higher in essential nutrients than those grown in greenhouses and ripened by laser.

Knowledge of chronobiological patterns can have many pragmatic implications for our day-to-day lives. While contemporary living can sometimes appear to subjugate biology - after all, who needs circadian rhythms when we have caffeine pills, energy drinks, shift work and cities that never sleep? - keeping in synch with our body clock is important.

The average urban resident, for example, rouses at the eye-blearing time of 6.04 a.m., which researchers believe to be far too early. One study found that even rising at 7.00 a.m. has deleterious effects on health unless exercise is performed for 30 minutes afterward. The optimum moment has been whittled down to 7.22 a.m.; muscle aches, headaches and moodiness were reported to be lowest by participants in the study who awoke then.

Once you’re up and ready to go, what then? If you’re trying to shed some extra pounds, dieticians are adamant: never skip breakfast. This disorients your circadian rhythm and puts your body in starvation mode. The recommended course of action is to follow an intense workout with a carbohydrate-rich breakfast; the other way round and weight loss results are not as pronounced.

Morning is also great for breaking out the vitamins. Supplement absorption by the body is not temporal-dependent, but naturopath Pam Stone notes that the extra boost at breakfast helps us get energised for the day ahead. For improved absorption, Stone suggests pairing supplements with a food in which they are soluble and steering clear of caffeinated beverages. Finally, Stone warns to take care with storage; high potency is best for absorption, and warmth and humidity are known to deplete the potency of a supplement.

After-dinner espressos are becoming more of a tradition - we have the Italians to thank for that - but to prepare for a good night’s sleep we are better off putting the brakes on caffeine consumption as early as 3 p.m. With a seven hour half-life, a cup of coffee containing 90 mg of caffeine taken at this hour could still leave 45 mg of caffeine in your nervous system at ten o’clock that evening. It is essential that, by the time you are ready to sleep, your body is rid of all traces.
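A minimal sketch of the arithmetic behind that figure, assuming simple exponential elimination with a seven-hour half-life (the passage itself gives only the end points):

$$C(t) = C_0 \left(\tfrac{1}{2}\right)^{t/7}, \qquad C(7) = 90\ \text{mg} \times \tfrac{1}{2} = 45\ \text{mg} \quad (t \text{ in hours})$$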

Evenings are important for winding down before sleep; however, dietician Geraldine Georgeou warns that an after-five carbohydrate-fast is more cultural myth than chronobiological demand. This will deprive your body of vital energy needs. Overloading your gut could lead to indigestion, though. Our digestive tracts do not shut down for the night entirely, but their work slows to a crawl as our bodies prepare for sleep. Consuming a modest snack should be entirely sufficient.

In Praise of Amateurs

Despite the specialization of scientific research, amateurs still have an important role to play.

During the scientific revolution of the 17th century, scientists were largely men of private means who pursued their interest in natural philosophy for their own edification. Only in the past century or two has it become possible to make a living from investigating the workings of nature. Modern science was, in other words, built on the work of amateurs. Today, science is an increasingly specialized and compartmentalized subject, the domain of experts who know more and more about less and less. Perhaps surprisingly, however, amateurs – even those without private means – are still important.

A recent poll carried out at a meeting of the American Association for the Advancement of Science by astronomer Dr Richard Fienberg found that, in addition to his own field of astronomy, amateurs are actively involved in such fields as acoustics, horticulture, ornithology, meteorology, hydrology and palaeontology. Far from being crackpots, amateur scientists are often in close touch with professionals, some of whom rely heavily on their co-operation.

Admittedly, some fields are more open to amateurs than others. Anything that requires expensive equipment is clearly a no-go area. And some kinds of research can be dangerous; most amateur chemists, jokes Dr Fienberg, are either locked up or have blown themselves to bits. But amateurs can make valuable contributions in fields from rocketry to palaeontology and the rise of the Internet has made it easier than before to collect data and distribute results.

Exactly which field of study has benefited most from the contributions of amateurs is a matter of some dispute. Dr Fienberg makes a strong case for astronomy. There is, he points out, a long tradition of collaboration between amateur and professional sky watchers. Numerous comets, asteroids and even the planet Uranus were discovered by amateurs. Today, in addition to comet and asteroid spotting, amateurs continue to do valuable work observing the brightness of variable stars and detecting novae - 'new' stars - in the Milky Way and supernovae in other galaxies. Amateur observers are helpful, says Dr Fienberg, because there are so many of them (they far outnumber professionals) and because they are distributed all over the world. This makes special kinds of observations possible: if several observers around the world accurately record the time when a star is eclipsed by an asteroid, for example, it is possible to derive useful information about the asteroid's shape.
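A minimal sketch, in Python, of how such timings can be turned into chords across an asteroid. The shadow speed and the observation values below are invented for illustration; none of them come from the passage.

```python
# Hypothetical sketch: each observer's occultation duration, multiplied by the
# speed at which the asteroid's shadow sweeps the ground (assumed constant),
# gives the length of the chord that observer's line of sight cut across the asteroid.

shadow_velocity_km_s = 15.0  # assumed sky-plane shadow speed, km per second

# (site, duration of the star's disappearance in seconds) - illustrative values only
observations = [
    ("site_A", 8.2),
    ("site_B", 11.5),
    ("site_C", 6.1),
]

for site, duration_s in observations:
    chord_km = duration_s * shadow_velocity_km_s
    print(f"{site}: chord of roughly {chord_km:.0f} km")
```

Combining chords recorded at different sites is what lets observers piece together an outline of the asteroid.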

Another field in which amateurs have traditionally played an important role is palaeontology. Adrian Hunt, a palaeontologist at Mesa Technical College in New Mexico, insists that his is the field in which amateurs have made the biggest contribution. Despite the development of high-tech equipment, he says, the best sensors for finding fossils are human eyes – lots of them.

Finding volunteers to look for fossils is not difficult, he says, because of the near universal interest in anything to do with dinosaurs. As well as helping with this research, volunteers learn about science, a process he calls ‘recreational education’.

Rick Bonney of the Cornell Laboratory of Ornithology in Ithaca, New York, contends that amateurs have contributed the most in his field. There are, he notes, thought to be as many as 60 million birdwatchers in America alone. Given their huge numbers and the wide geographical coverage they provide, Mr Bonney has enlisted thousands of amateurs in a number of research projects. Over the past few years their observations have uncovered previously unknown trends and cycles in bird migrations and revealed declines in the breeding populations of several species of migratory birds, prompting a habitat conservation programme.

Despite the successes and whatever the field of study, collaboration between amateurs and professionals is not without its difficulties. Not everyone, for example, is happy with the term 'amateur'. Mr Bonney has coined the term 'citizen scientist' because he felt that other words, such as 'volunteer', sounded disparaging. A more serious problem is the question of how professionals can best acknowledge the contributions made by amateurs. Dr Fienberg says that some amateur astronomers are happy to provide their observations but grumble about not being reimbursed for out-of-pocket expenses. Others feel let down when their observations are used in scientific papers, but they are not listed as co-authors. Dr Hunt says some amateur palaeontologists are disappointed when told that they cannot take finds home with them.

These are legitimate concerns but none seems insurmountable. Provided amateurs and professionals agree the terms on which they will work together beforehand, there is no reason why co-operation between the two groups should not flourish. Last year Dr S. Carlson, founder of the Society for Amateur Scientists, won an award worth $290,000 for his work in promoting such co-operation. He says that one of the main benefits of the prize is the endorsement it has given to the contributions of amateur scientists, which has done much to silence critics among those professionals who believe science should remain their exclusive preserve.

At the moment, says Dr Carlson, the society is involved in several schemes including an innovative rocket-design project and the setting up of a network of observers who will search for evidence of a link between low-frequency radiation and earthquakes. The amateurs, he says, provide enthusiasm and talent, while the professionals provide guidance 'so that anything they do discover will be taken seriously'. Having laid the foundations of science, amateurs will have much to contribute to its ever-expanding edifice.

 

 

Acquiring the principles of mathematics and science

It has been pointed out that learning mathematics and science is not so much learning facts as learning ways of thinking. It has also been emphasised that in order to learn science, people often have to change the way they think in ordinary situations. For example, in order to understand even simple concepts such as heat and temperature, ways of thinking of temperature as a measure of heat must be abandoned and a distinction between ‘temperature’ and ‘heat’ must be learned. These changes in ways of thinking are often referred to as conceptual changes. But how do conceptual changes happen? How do young people change their ways of thinking as they develop and as they learn in school?

Traditional instruction based on telling students how modern scientists think does not seem to be very successful. Students may learn the definitions, the formulae, the terminology, and yet still maintain their previous conceptions. This difficulty has been illustrated many times, for example, when instructed students are interviewed about heat and temperature. It is often identified by teachers as a difficulty in applying the concepts learned in the classroom; students may be able to repeat a formula but fail to use the concept represented by the formula when they explain observed events.

The psychologist Piaget suggested an interesting hypothesis relating to the process of cognitive change in children. Cognitive change was expected to result from the pupils’ own intellectual activity. When confronted with a result that challenges their thinking - that is, when faced with conflict - pupils realise that they need to think again about their own ways of solving problems, regardless of whether the problem is one in mathematics or in science. He hypothesised that conflict brings about disequilibrium, and then triggers equilibration processes that ultimately produce cognitive change. For this reason, according to Piaget and his colleagues, in order for pupils to progress in their thinking they need to be actively engaged in solving problems that will challenge their current mode of reasoning. However, Piaget also pointed out that young children do not always discard their ideas in the face of contradictory evidence. They may actually discard the evidence and keep their theory.

Piaget's hypothesis about how cognitive change occurs was later translated into an educational approach which is now termed 'discovery learning'. Discovery learning initially took what is now considered the 'lone learner' route. The role of the teacher was to select situations that challenged the pupils' reasoning; and the pupils' peers had no real role in this process. However, it was subsequently proposed that interpersonal conflict, especially with peers, might play an important role in promoting cognitive change. This hypothesis, originally advanced by Perret-Clermont (1980) and Doise and Mugny (1984), has been investigated in many recent studies of science teaching and learning.

Christine Howe and her colleagues, for example, have compared children’s progress in understanding several types of science concepts when they are given the opportunity to observe relevant events. In one study, Howe compared the progress of 8 to 12-year-old children in understanding what influences motion down a slope. In order to ascertain the role of conflict in group work, they created two kinds of groups according to a pre-test: one in which the children had dissimilar views, and a second in which the children had similar views.

They found support for the idea that children in the groups with dissimilar views progressed more after their training sessions than those who had been placed in groups with similar views. However, they found no evidence to support the idea that the children worked out their new conceptions during their group discussions, because progress was not actually observed in a post-test immediately after the sessions of group work, but rather in a second test given around four weeks after the group work.

In another study, Howe set out to investigate whether the progress obtained through pair work could be a function of the exchange of ideas. They investigated the progress made by 12-15-year-old pupils in understanding the path of falling objects, a topic that usually involves conceptual difficulties. In order to create pairs of pupils with varying levels of dissimilarity in their initial conceptions, the pupils’ predictions and explanations of the path of falling objects were assessed before they were engaged in pair work. The work sessions involved solving computer-presented problems, again about predicting and explaining the paths of falling objects. A post-test, given to individuals, assessed the progress made by pupils in their conceptions of what influenced the path of falling objects.

 

 

Adventures in mathematical reasoning

 

Occasionally, in some difficult musical compositions, there are beautiful, but easy parts - parts so simple a beginner could play them. So it is with mathematics as well. There are some discoveries in advanced mathematics that do not depend on specialized knowledge, not even on algebra, geometry, or trigonometry. Instead they may involve, at most, a little arithmetic, such as ‘the sum of two odd numbers is even’, and common sense. Each of the eight chapters in this book illustrates this phenomenon. Anyone can understand every step in the reasoning. The thinking in each chapter uses at most only elementary arithmetic, and sometimes not even that. Thus all readers will have the chance to participate in a mathematical experience, to appreciate the beauty of mathematics, and to become familiar with its logical, yet intuitive, style of thinking.

One of my purposes in writing this book is to give readers who haven’t had the opportunity to see and enjoy real mathematics the chance to appreciate the mathematical way of thinking. I want to reveal not only some of the fascinating discoveries, but, more importantly, the reasoning behind them. In that respect, this book differs from most books on mathematics written for the general public. Some present the lives of colorful mathematicians. Others describe important applications of mathematics. Yet others go into mathematical procedures, but assume that the reader is adept in using algebra.

I hope this book will help bridge that notorious gap that separates the two cultures: the humanities and the sciences, or should I say the right brain (intuitive) and the left brain (analytical, numerical). As the chapters will illustrate, mathematics is not restricted to the analytical and numerical; intuition plays a significant role. The alleged gap can be narrowed or completely overcome by anyone, in part because each of us is far from using the full capacity of either side of the brain. To illustrate our human potential, I cite a structural engineer who is an artist, an electrical engineer who is an opera singer, an opera singer who published mathematical research, and a mathematician who publishes short stories.

Other scientists have written books to explain their fields to non-scientists, but have necessarily had to omit the mathematics, although it provides the foundation of their theories. The reader must remain a tantalized spectator rather than an involved participant, since the appropriate language for describing the details in much of science is mathematics, whether the subject is the expanding universe, subatomic particles, or chromosomes. Though the broad outline of a scientific theory can be sketched intuitively, when a part of the physical universe is finally understood, its description often looks like a page in a mathematics text.

Still, the non-mathematical reader can go far in understanding mathematical reasoning. This book presents the details that illustrate the mathematical style of thinking, which involves sustained, step-by-step analysis, experiments, and insights. You will turn these pages much more slowly than when reading a novel or a newspaper. It may help to have a pencil and paper ready to check claims and carry out experiments.

As I wrote, I kept in mind two types of readers: those who enjoyed mathematics until they were turned off by an unpleasant episode, usually around fifth grade, and mathematics aficionados, who will find much that is new throughout the book. This book also serves readers who simply want to sharpen their analytical skills. Many careers, such as law and medicine, require extended, precise analysis. Each chapter offers practice in following a sustained and closely argued line of thought. That mathematics can develop this skill is shown by these two testimonials:

A physician wrote, 'The discipline of analytical thought processes [in mathematics] prepared me extremely well for medical school. In medicine one is faced with a problem which must be thoroughly analyzed before a solution can be found. The process is similar to doing mathematics.'

A lawyer made the same point: 'Although I had no background in law - not even one political science course - I did well at one of the best law schools. I attribute much of my success there to having learned, through the study of mathematics, and, in particular, theorems, how to analyze complicated principles. Lawyers who have studied mathematics can master the legal principles in a way that most others cannot.'

I hope you will share my delight in watching as simple, even naive, questions lead to remarkable solutions and purely theoretical discoveries find unanticipated applications.

 

Air pollution

Part One

Air pollution is increasingly becoming the focus of government and citizen concern around the globe. From Mexico City and New York, to Singapore and Tokyo, new solutions to this old problem are being proposed, trialled and implemented with ever increasing speed. It is feared that unless pollution reduction measures are able to keep pace with the continued pressures of urban growth, air quality in many of the world's major cities will deteriorate beyond reason.

Action is being taken along several fronts: through new legislation, improved enforcement and innovative technology. In Los Angeles, state regulations are forcing manufacturers to try to sell ever cleaner cars: the first of the cleanest, titled 'Zero Emission Vehicles', have to be available soon, since they are intended to make up 2 per cent of sales in 1997. Local authorities in London are campaigning to be allowed to enforce anti-pollution laws themselves; at present only the police have the power to do so, but they tend to be busy elsewhere. In Singapore, renting out road space to users is the way of the future.

When Britain's Royal Automobile Club monitored the exhausts of 60,000 vehicles, it found that 12 per cent of them produced more than half the total pollution. Older cars were the worst offenders; though a sizeable number of quite new cars were also identified as gross polluters, they were simply badly tuned. California has developed a scheme to get these gross polluters off the streets: it offers a flat $700 for any old, run-down vehicle driven in by its owner. The aim is to remove the heaviest-polluting, most decrepit vehicles from the roads.

As part of a European Union environmental programme, a London council is testing an infra-red spectrometer from the University of Denver in Colorado. It gauges the pollution from a passing vehicle - more useful than the annual stationary test that is the British standard today - by bouncing a beam through the exhaust and measuring what gets blocked. The council's next step may be to link the system to a computerised video camera able to read number plates automatically.

The effort to clean up cars may do little to cut pollution if nothing is done about the tendency to drive them more. Los Angeles has some of the world's cleanest cars - far better than those of Europe - but the total number of miles those cars drive continues to grow. One solution is car-pooling, an arrangement in which a number of people who share the same destination share the use of one car. However, the average number of people in a car on the freeway in Los Angeles, which is 1.0, has been falling steadily. Increasing it would be an effective way of reducing emissions as well as easing congestion. The trouble is, Los Angeles drivers seem to like being alone in their cars.

Singapore has for a while had a scheme that forces drivers to buy a badge if they wish to visit a certain part of the city. Electronic innovations make possible increasing sophistication: rates can vary according to road conditions, time of day and so on. Singapore is advancing in this direction, with a city-wide network of transmitters to collect information and charge drivers as they pass certain points. Such road-pricing, however, can be controversial. When the local government in Cambridge, England, considered introducing Singaporean techniques, it faced vocal and ultimately successful opposition.

The scope of the problem facing the world’s cities is immense. In 1992, the United Nations Environmental Programme and the World Health Organisation (WHO) concluded that all of a sample of twenty megacities - places likely to have more than ten million inhabitants in the year 2000 - already exceeded the level the WHO deems healthy in at least one major pollutant. Two-thirds of them exceeded the guidelines for two, seven for three or more.

Of the six pollutants monitored by the WHO - carbon monoxide, nitrogen dioxide, ozone, sulphur dioxide, lead and particulate matter - it is this last category that is attracting the most attention from health researchers. PM10, a sub-category of particulate matter measuring ten-millionths of a metre across, has been implicated in thousands of deaths a year in Britain alone. Research being conducted in two counties of Southern California is reaching similarly disturbing conclusions concerning this little-understood pollutant.

A world-wide rise in allergies, particularly asthma, over the past four decades is now said to be linked with increased air pollution. The lungs and brains of children who grow up in polluted air offer further evidence of its destructive power. The old and ill, however, are the most vulnerable to the acute effects of heavily polluted stagnant air. It can actually hasten death, as it did in December 1991 when a cloud of exhaust fumes lingered over the city of London for over a week.

The United Nations has estimated that in the year 2000 there will be twenty-four mega-cities and a further eighty-five cities of more than three million people. The pressure on public officials, corporations and urban citizens to reverse established trends in air pollution is likely to grow in proportion with the growth of cities themselves. Progress is being made. The question, though, remains the same: ‘Will change happen quickly enough?’

 

 

The Garbage Problem

Garbage is a big problem all over the world. People buy and use a lot of things nowadays. After a while, they throw them away in the garbage bin. All the garbage is later thrown away or dumped outside the city. These places are called landfill sites. In many cities, landfill sites are now full.

About one-third of all the garbage is made of paper. Another third of the garbage is a mix of glass, metal, plastic, and wood. The final third comes from food scraps. These are remains of food that are not eaten any more. Food scraps are not a big garbage problem for the environment. Our natural world can get rid of food scraps. Insects and bacteria eat the food scraps and make them go away.

But this does not happen with other materials. Plastic is very toxic to the environment. It poisons the earth and the water. We use plastic for many things, such as combs or pens. Also, when we buy something from the supermarket, we get a plastic bag. As soon as we get home, we throw the bag away. Plastic is also used to make Styrofoam. Many take-out coffee cups and fast-food boxes are made of Styrofoam. When we buy coffee and drink it on the street, we throw that cup away too.

Other garbage we throw away is metal. The cans for soft drinks or beer are made of aluminum. Aluminum is toxic too. The paper and wood we throw away are not toxic. But we have to cut down many trees every year to make paper and wood. Our environment suffers when there are no forests around. The air is less fresh, and the earth dries up. With no water in the earth, plants cannot grow.

Solutions to the garbage problem

We have to manage our waste and garbage better. If we throw away so many things, soon we will have no place to dump them.

The best thing to do is to reduce the amount of garbage. If we use less, we throw away less. For instance, we can buy food in big boxes and packages. Then we throw away only one box every month or so. Otherwise, we throw away many small boxes or cans every day.

Similarly, we can reuse a lot of packaging. For example, we do not have to buy take-out coffee in Styrofoam cups. We can bring our own cup from home and fill it with fresh coffee. We also do not have to take the plastic bags from the supermarket. We can bring our own cloth bag from home instead. When we pack lunch, it is better to use a lunch box than a paper bag. Instead of paper plates, we can use real plates. We can clean up with a dishtowel, not a paper towel. We can use a compost bin for food scraps. In this way, the food gets back into the earth. It does not get mixed up with the regular garbage.

Finally, all paper, glass and metal we do use, we can recycle. In many countries, there are now recycling programs. In Germany, for example, people separate all glass bottles by color. Then they put the bottles into special bins that are on the street. The city collects the glass, cleans it, and reuses it. As well, in most countries, people recycle newspapers and cardboard. It is easy and efficient.

 

 

Indoor Pollution

Since the early eighties we have been only too aware of the devastating effects of large-scale environmental pollution. Such pollution is generally the result of poor government planning in many developing nations or the short-sighted, selfish policies of the already industrialised countries which encourage a minority of the world’s population to squander the majority of its natural resources.

While events such as the deforestation of the Amazon jungle or the nuclear disaster in Chernobyl continue to receive high media exposure, as do acts of environmental sabotage, it must be remembered that not all pollution is on this grand scale. A large proportion of the world’s pollution has its source much closer to home. The recent spillage of crude oil from an oil tanker accidentally discharging its cargo straight into Sydney Harbour not only caused serious damage to the harbour foreshores but also created severely toxic fumes which hung over the suburbs for days and left the angry residents wondering how such a disaster could have been allowed to happen.

Avoiding pollution can be a full-time job. Try not to inhale traffic fumes; keep away from chemical plants and building-sites; wear a mask when cycling. It is enough to make you want to stay at home. But that, according to a growing body of scientific evidence, would also be a bad idea. Research shows that levels of pollutants such as hazardous gases, particulate matter and other chemical 'nasties' are usually higher indoors than out, even in the most polluted cities. Since the average American spends 18 hours indoors for every hour outside, it looks as though many environmentalists may be attacking the wrong target.

The latest study, conducted by two environmental engineers, Richard Corsi and Cynthia Howard-Reed, of the University of Texas in Austin, and published in Environmental Science and Technology, suggests that it is the process of keeping clean that may be making indoor pollution worse. The researchers found that baths, showers, dishwashers and washing machines can all be significant sources of indoor pollution, because they extract trace amounts of chemicals from the water that they use and transfer them to the air.

Nearly all public water supplies contain very low concentrations of toxic chemicals, most of them left over from the otherwise beneficial process of chlorination. Dr. Corsi wondered whether they stay there when water is used, or whether they end up in the air that people breathe. The team conducted a series of experiments in which known quantities of five such chemicals were mixed with water and passed through a dishwasher, a washing machine, a shower head inside a shower stall or a tap in a bath, all inside a specially designed chamber. The levels of chemicals in the effluent water and in the air extracted from the chamber were then measured to see how much of each chemical had been transferred from the water into the air.
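One simple way to express what that comparison yields (a sketch only; the passage does not give the authors' own formula): if C_in is a chemical's concentration in the incoming water and C_out its concentration in the effluent, the fraction stripped into the air is roughly

$$\eta \approx 1 - \frac{C_{\text{out}}}{C_{\text{in}}}$$

with the chemical levels measured in the chamber air acting as a cross-check on this mass balance.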

The degree to which the most volatile elements could be removed from the water, a process known as chemical stripping, depended on a wide range of factors, including the volatility of the chemical, the temperature of the water and the surface area available for transfer. Dishwashers were found to be particularly effective: the high-temperature spray, splashing against the crockery and cutlery, results in a nasty plume of toxic chemicals that escapes when the door is opened at the end of the cycle.

In fact, in many cases, the degree of exposure to toxic chemicals in tap water by inhalation is comparable to the exposure that would result from drinking the stuff. This is significant because many people are so concerned about water-borne pollutants that they drink only bottled water, worldwide sales of which are forecast to reach $72 billion by next year. Dr Corsi's results suggest that they are being exposed to such pollutants anyway simply by breathing at home.

The aim of such research is not, however, to encourage the use of gas masks when unloading the washing. Instead, it is to bring a sense of perspective to the debate about pollution. According to Dr Corsi, disproportionate effort is wasted campaigning against certain forms of outdoor pollution, when there is as much or more cause for concern indoors, right under people’s noses.

Using gas cookers or burning candles, for example, both result in indoor levels of carbon monoxide and particulate matter that are just as high as those to be found outside, amid heavy traffic. Overcrowded classrooms whose ventilation systems were designed for smaller numbers of children frequently contain levels of carbon dioxide that would be regarded as unacceptable on board a submarine. ‘New car smell’ is the result of high levels of toxic chemicals, not cleanliness. Laser printers, computers, carpets and paints all contribute to the noxious indoor mix.

The implications of indoor pollution for health are unclear. But before worrying about the problems caused by large-scale industry, it makes sense to consider the small-scale pollution at home and welcome international debate about this. Scientists investigating indoor pollution will gather next month in Edinburgh at the Indoor Air conference to discuss the problem. Perhaps unwisely, the meeting is being held indoors.


Striking the right note

Is perfect pitch a rare talent possessed solely by the likes of Beethoven? Kathryn Brown discusses this much sought-after musical ability.

The uncanny, if sometimes distracting, ability to name a solitary note out of the blue, without any other notes for reference, is a prized musical talent - and a scientific mystery. Musicians with perfect pitch - or, as many researchers prefer to call it, absolute pitch - can often play pieces by ear, and many can transcribe music brilliantly. That’s because they perceive the position of a note in the musical stave - its pitch - as clearly as the fact that they heard it. Hearing and naming the pitch go hand in hand.

By contrast, most musicians follow not the notes, but the relationship between them. They may easily recognise two notes as being a certain number of tones apart, but could name the higher note as an E only if they are told the lower one is a C, for example. This is relative pitch. Useful, but much less mysterious.

For centuries, absolute pitch has been thought of as the preserve of the musical elite. Some estimates suggest that maybe fewer than 1 in 2,000 people possess it. But a growing number of studies, from speech experiments to brain scans, are now suggesting that a knack for absolute pitch may be far more common, and more varied, than previously thought. ‘Absolute pitch is not an all or nothing feature,’ says Marvin, a music theorist at the University of Rochester in New York state. Some researchers even claim that we could all develop the skill, regardless of our musical talent. And their work may finally settle a decades-old debate about whether absolute pitch depends on melodious genes - or early music lessons.

Music psychologist Diana Deutsch at the University of California in San Diego is the leading voice. Last month at the Acoustical Society of America meeting in Columbus, Ohio, Deutsch reported a study that suggests we all have the potential to acquire absolute pitch - and that speakers of tone languages use it every day. A third of the world’s population - chiefly people in Asia and Africa - speak tone languages, in which a word’s meaning can vary depending on the pitch a speaker uses.

Deutsch and her colleagues asked seven native Vietnamese speakers and 15 native Mandarin speakers to read out lists of words on different days. The chosen words spanned a range of pitches, to force the speakers to raise and lower their voices considerably. By recording these recited lists and taking the average pitch for each whole word, the researchers compared the pitches used by each person to say each word on different days.
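As a rough sketch of that comparison (the pitch values below are invented for illustration; the study's data are not reproduced here):

```python
# Hypothetical sketch: average the pitch of each recorded word on each day,
# then look at the day-to-day difference. Values are in Hz and invented.

day1 = {"ma": [220.0, 218.5, 221.0], "shu": [196.0, 197.2]}
day2 = {"ma": [219.5, 220.5, 220.0], "shu": [195.5, 196.8]}

def mean(values):
    return sum(values) / len(values)

for word in day1:
    diff_hz = abs(mean(day1[word]) - mean(day2[word]))
    print(f"'{word}': {diff_hz:.1f} Hz difference between days")
```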

Both groups showed strikingly consistent pitch for any given word - often less than a quarter-tone difference between days. ‘The similarity,’ Deutsch says, ‘is mind-boggling.’ It’s also, she says, a real example of absolute pitch. As babies, the speakers learnt to associate certain pitches with meaningful words - just as a musician labels one tone A and another B - and they demonstrate this precise use of pitch regardless of whether or not they have had any musical training, she adds.

Deutsch isn't the only researcher turning up everyday evidence of absolute pitch. At least three other experiments have found that people can launch into familiar songs at or very near the correct pitches. Some researchers have nicknamed this ability 'absolute memory', and they say it pops up in other senses, too. Given studies like these, the real mystery is why we don't all have absolute pitch, says cognitive psychologist Daniel Levitin of McGill University in Montreal.

Over the past decade, researchers have confirmed that absolute pitch often runs in families. Nelson Freimer of the University of California in San Francisco, for example, is just completing a study that he says strongly suggests the right genes help create this brand of musical genius. Freimer gave tone tests to people with absolute pitch and to their relatives. He also tested several hundred other people who had taken early music lessons. He found that relatives of people with absolute pitch were far more likely to develop the skill than people who simply had the music lessons. 'There is clearly a familial aggregation of absolute pitch,' Freimer says.

Freimer says some children are probably genetically predisposed toward absolute pitch - and this innate inclination blossoms during childhood music lessons. Indeed, many researchers now point to this harmony of nature and nurture to explain why musicians with absolute pitch show different levels of the talent.

Indeed, researchers are finding more and more evidence suggesting music lessons are critical to the development of absolute pitch. In a survey of 2,700 students in American music conservatories and college programmes, New York University geneticist Peter Gregersen and his colleagues found that a whopping 32 per cent of the Asian students reported having absolute pitch, compared with just 7 per cent of non-Asian students. While that might suggest a genetic tendency towards absolute pitch in the Asian population, Gregersen says that the type and timing of music lessons probably explains much of the difference.

For one thing, those with absolute pitch started lessons, on average, when they were five years old, while those without absolute pitch started around the age of eight. Moreover, adds Gregersen, the type of music lessons favoured in Asia, and by many of the Asian families in his study, such as the Suzuki method, often focus on playing by ear and learning the names of musical notes, while those more commonly used in the US tend to emphasise learning scales in a relative pitch way. In Japanese pre-school music programmes, he says, children often have to listen to notes played on a piano and hold up a coloured flag to signal the pitch. ‘There’s a distinct cultural difference,’ he says.

Deutsch predicts that further studies will reveal absolute pitch - in its imperfect, latent form - inside all of us. The Western emphasis on relative pitch simply obscures it, she contends. ‘It’s very likely that scientists will end up concluding that we’re all born with the potential to acquire very fine-grained absolute pitch. It’s really just a matter of life getting in the way.’

HOW DOES THE BIOLOGICAL CLOCK TICK?

Our life span is restricted. Everyone accepts this as ‘biologically’ obvious. ‘Nothing lives for ever!’ However, in this statement we think of artificially produced, technical objects, products which are subjected to natural wear and tear during use. This leads to the result that at some time or other the object stops working and is unusable (‘death’ in the biological sense). But are the wear and tear and loss of function of technical objects and the death of living organisms really similar or comparable? 

Our ‘dead’ products are ‘static’, closed systems. It is always the basic material which constitutes the object and which, in the natural course of things, is worn down and becomes ‘older’. Ageing in this case must occur according to the laws of physical chemistry and of thermodynamics. Although the same law holds for a living organism, the result of this law is not inexorable in the same way. At least as long as a biological system has the ability to renew itself it could actually become older without ageing; an organism is an open, dynamic system through which new material continuously flows. Destruction of old material and formation of new material are thus in permanent dynamic equilibrium. The material of which the organism is formed changes continuously. Thus our bodies continuously exchange old substance for new, just like a spring which more or less maintains its form and movement, but in which the water molecules are always different.

Thus ageing and death should not be seen as inevitable, particularly as the organism possesses many mechanisms for repair. It is not, in principle, necessary for a biological system to age and die. Nevertheless, a restricted life span, ageing, and then death are basic characteristics of life. The reason for this is easy to recognise: in nature, the existent organisms either adapt or are regularly replaced by new types. Because of changes in the genetic material (mutations) these have new characteristics and in the course of their individual lives they are tested for optimal or better adaptation to the environmental conditions. Immortality would disturb this system - it needs room for new and better life. This is the basic problem of evolution.

Every organism has a life span which is highly characteristic. There are striking differences in life span between different species, but within one species the parameter is relatively constant. For example, the average duration of human life has hardly changed in thousands of years. Although more and more people attain an advanced age as a result of developments in medical care and better nutrition, the characteristic upper limit for most remains 80 years. A further argument against the simple wear and tear theory is the observation that the time within which organisms age lies between a few days (even a few hours for unicellular organisms) and several thousand years, as with mammoth trees.

If a life span is a genetically determined biological characteristic, it is logically necessary to propose the existence of an internal clock, which in some way measures and controls the ageing process and which finally determines death as the last step in a fixed programme. Like the life span, the metabolic rate has for different organisms a fixed mathematical relationship to the body mass. In comparison to the life span this relationship is ‘inverted’: the larger the organism the lower its metabolic rate. Again this relationship is valid not only for birds, but also, similarly on average within the systematic unit, for all other organisms (plants, animals, unicellular organisms).
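
One way to picture the ‘inverted’ relationship described above is as a pair of rough allometric scalings. This is only an illustrative sketch: the positive exponents a and b are assumptions of the general Kleiber type, not values given in the passage.

\[ \text{life span} \propto M^{a}, \qquad \frac{\text{metabolic rate}}{M} \propto M^{-b}, \qquad a, b > 0 \]

Here M is body mass. Read this way, a larger organism turns over energy more slowly per unit of body mass and, on average, lives longer, which is the inverse pattern the text describes.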

Animals which behave ‘frugally’ with energy become particularly old, for example, crocodiles and tortoises. Parrots and birds of prey are often held chained up. Thus they are not able to ‘experience life’ and so they attain a high life span in captivity. Animals which save energy by hibernation or lethargy (e.g. bats or hedgehogs) live much longer than those which are always active. The metabolic rate of mice can be reduced by a very low consumption of food (hunger diet). They then may live twice as long as their well-fed comrades. Women live distinctly (about 10 per cent) longer than men. If you examine the metabolic rates of the two sexes you find that the higher male metabolic rate roughly accounts for the lower male life span. That means that men live life ‘energetically’ - more intensively, but not for as long.

It follows from the above that sparing use of energy reserves should tend to extend life. Extreme high performance sports may lead to optimal cardiovascular performance, but they quite certainly do not prolong life. Relaxation lowers metabolic rate, as does adequate sleep and in general an equable and balanced personality. Each of us can develop his or her own ‘energy saving programme’ with a little self-observation, critical self-control and, above all, logical consistency. Experience will show that to live in this way not only increases the life span but is also very healthy. This final aspect should not be forgotten.

 

 

The meaning and power of smell

The sense of smell, or olfaction, is powerful. Odours affect us on a physical, psychological and social level. For the most part, however, we breathe in the aromas which surround us without being consciously aware of their importance to us. It is only when the faculty of smell is impaired for some reason that we begin to realise the essential role the sense of smell plays in our sense of well-being.

 

A survey conducted by Anthony Synott at Montreal’s Concordia University asked participants to comment on how important smell was to them in their lives. It became apparent that smell can evoke strong emotional responses. A scent associated with a good experience can bring a rush of joy, while a foul odour or one associated with a bad memory may make us grimace with disgust. Respondents to the survey noted that many of their olfactory likes and dislikes were based on emotional associations. Such associations can be powerful enough so that odours that we would generally label unpleasant become agreeable, and those that we would generally consider fragrant become disagreeable for particular individuals. The perception of smell, therefore, consists not only of the sensation of the odours themselves, but of the experiences and emotions associated with them.

Odours are also essential cues in social bonding. One respondent to the survey believed that there is no true emotional bonding without touching and smelling a loved one. In fact, infants recognise the odours of their mothers soon after birth and adults can often identify their children or spouses by scent. In one well-known test, women and men were able to distinguish by smell alone clothing worn by their marriage partners from similar clothing worn by other people. Most of the subjects would probably never have given much thought to odour as a cue for identifying family members before being involved in the test, but as the experiment revealed, even when not consciously considered, smells register.

In spite of its importance to our emotional and sensory lives, smell is probably the most undervalued sense in many cultures. The reason often given for the low regard in which smell is held is that, in comparison with its importance among animals, the human sense of smell is feeble and undeveloped. While it is true that the olfactory powers of humans are nothing like as fine as those possessed by certain animals, they are still remarkably acute. Our noses are able to recognise thousands of smells, and to perceive odours which are present only in extremely small quantities. 

 

Smell, however, is a highly elusive phenomenon. Odours, unlike colours, for instance, cannot be named in many languages because the specific vocabulary simply doesn’t exist. ‘It smells like . . . ,’ we have to say when describing an odour, struggling to express our olfactory experience. Nor can odours be recorded: there is no effective way to either capture or store them over time. In the realm of olfaction, we must make do with descriptions and recollections. This has implications for olfactory research. 

Most of the research on smell undertaken to date has been of a physical scientific nature. Significant advances have been made in the understanding of the biological and chemical nature of olfaction, but many fundamental questions have yet to be answered. Researchers have still to decide whether smell is one sense or two - one responding to odours proper and the other registering odourless chemicals in the air. Other unanswered questions are whether the nose is the only part of the body affected by odours, and how smells can be measured objectively given the nonphysical components. Questions like these mean that interest in the psychology of smell is inevitably set to play an increasingly important role for researchers.

However, smell is not simply a biological and psychological phenomenon. Smell is cultural, hence it is a social and historical phenomenon. Odours are invested with cultural values: smells that are considered to be offensive in some cultures may be perfectly acceptable in others. Therefore, our sense of smell is a means of, and model for, interacting with the world. Different smells can provide us with intimate and emotionally charged experiences and the value that we attach to these experiences is interiorised by the members of society in a deeply personal way. Importantly, our commonly held feelings about smells can help distinguish us from other cultures. The study of the cultural history of smell is, therefore, in a very real sense, an investigation into the essence of human culture.

 

 

Sleep helps reduce errors in memory

Sleep may reduce mistakes in memory, according to a first-of-its-kind study led by a scientist at Michigan State University.

The findings, which appear in the September issue of the journal Learning & Memory, have practical implications for many people, from students doing multiple-choice tests to elderly people confusing their medicine, says Kimberly Fenn, principal investigator and assistant professor of psychology.

‘It’s easy to muddle things in your mind,’ Fenn says. ‘This research suggests that after sleep, you’re better able to pick out the incorrect parts of that memory.’ Fenn and colleagues from the University of Chicago and Washington University in St Louis studied the presence of incorrect or false memory in groups of college students. While previous research has shown that sleep improves memory, this study is the first one that looks at errors in memory, she said.

Study participants were ‘trained’ by being shown or listening to lists of words. Then, twelve hours later, they were shown individual words and asked to identify which words they had seen or heard in the earlier session. One group of students was trained at 10 a.m. and tested at 10 p.m. after the course of a normal sleepless day. Another group was trained at night and tested twelve hours later in the morning, after about six hours of sleep. Three experiments were conducted. In each experiment, the results showed that students who had slept did not have as many problems with false memory and chose fewer incorrect words. 

How does sleep help? The answer isn’t known, Fenn said, but she suspects it may be due to sleep strengthening the source of the memory. The source, or context in which the information is acquired, is a vital element of the memory process.

In other words, it may be easier to remember something if you can also remember where you first heard or saw it. Or perhaps the people who didn’t sleep as much during the study received so much other information during the day that this affected their memory ability, Fenn said.

Further research is needed, she said, adding that she plans to study different population groups, particularly the elderly. ‘We know older individuals generally have worse memory performance than younger individuals.

‘We also know from other research that elderly individuals tend to be more prone to false memories,’ Fenn said. ‘Given the work we’ve done, it’s possible that sleep may actually help them to reject this false information. And potentially this could help to improve their quality of life.’

Early Childhood Education

New Zealand's National Party spokesman on education, Dr Lockwood Smith, recently visited the US and Britain. Here he reports on the findings of his trip and what they could mean for New Zealand's education policy

 

‘Education To Be More' was published last August. It was the report of the New Zealand Government's Early Childhood Care and Education Working Group. The report argued for enhanced equity of access and better funding for childcare and early childhood education institutions. Unquestionably, that's a real need; but since parents don't normally send children to pre-schools until the age of three, are we missing out on the most important years of all? 

A 13 year study of early childhood development at Harvard University has shown that, by the age of three, most children have the potential to understand about 1000 words - most of the language they will use in ordinary conversation for the rest of their lives.

Furthermore, research has shown that while every child is born with a natural curiosity, it can be suppressed dramatically during the second and third years of life. Researchers claim that the human personality is formed during the first two years of life, and during the first three years children learn the basic skills they will use in all their later learning both at home and at school. Once over the age of three, children continue to expand on existing knowledge of the world.

It is generally acknowledged that young people from poorer socio-economic backgrounds tend to do less well in our education system. That's observed not just in New Zealand, but also in Australia, Britain and America. In an attempt to overcome that educational under-achievement, a nationwide programme called 'Headstart' was launched in the United States in 1965. A lot of money was poured into it. It took children into pre-school institutions at the age of three and was supposed to help the children of poorer families succeed in school.

Despite substantial funding, results have been disappointing. It is thought that there are two explanations for this. First, the programme began too late. Many children who entered it at the age of three were already behind their peers in language and measurable intelligence. Second, the parents were not involved. At the end of each day, 'Headstart' children returned to the same disadvantaged home environment.

As a result of the growing research evidence of the importance of the first three years of a child's life and the disappointing results from 'Headstart', a pilot programme was launched in Missouri in the US that focused on parents as the child's first teachers. The 'Missouri' programme was predicated on research showing that working with the family, rather than bypassing the parents, is the most effective way of helping children get off to the best possible start in life. The four-year pilot study included 380 families who were about to have their first child and who represented a cross-section of socio-economic status, age and family configurations. They included single-parent and two-parent families, families in which both parents worked, and families with either the mother or father at home.

The programme involved trained parent-educators visiting the parents' home and working with the parent, or parents, and the child. Information on child development, and guidance on things to look for and expect as the child grows were provided, plus guidance in fostering the child's intellectual, language, social and motor-skill development. Periodic check-ups of the child's educational and sensory development (hearing and vision) were made to detect possible handicaps that might interfere with growth and development. Medical problems were referred to professionals.

Parent-educators made personal visits to homes and monthly group meetings were held with other new parents to share experience and discuss topics of interest. Parent resource centres, located in school buildings, offered learning materials for families and facilitators for childcare.

At the age of three, the children who had been involved in the 'Missouri' programme were evaluated alongside a cross-section of children selected from the same range of socio-economic backgrounds and family situations, and also a random sample of children of that age. The results were phenomenal. By the age of three, the children in the programme were significantly more advanced in language development than their peers, had made greater strides in problem solving and other intellectual skills, and were further along in social development. In fact, the average child on the programme was performing at the level of the top 15 to 20 per cent of their peers in such things as auditory comprehension, verbal ability and language ability.

Most important of all, the traditional measures of 'risk', such as parents' age and education, or whether they were a single parent, bore little or no relationship to the measures of achievement and language development. Children in the programme performed equally well regardless of socio-economic disadvantages. Child abuse was virtually eliminated. The one factor that was found to affect the child's development was family stress leading to a poor quality of parent-child interaction. That interaction was not necessarily bad in poorer families.

These research findings are exciting. There is growing evidence in New Zealand that children from poorer socio-economic backgrounds are arriving at school less well developed and that our school system tends to perpetuate that disadvantage. The initiative outlined above could break that cycle of disadvantage. The concept of working with parents in their homes, or at their place of work, contrasts quite markedly with the report of the Early Childhood Care and Education Working Group. Their focus is on getting children and mothers access to childcare and institutionalised early childhood education. Education from the age of three to five is undoubtedly vital, but without a similar focus on parent education and on the vital importance of the first three years, some evidence indicates that it will not be enough to overcome educational inequity. 

 

PRIVATE SCHOOLS

Most countries’ education systems have had what you might call educational disasters, but, sadly, in many areas of certain countries these ‘disasters’ are still evident today. The English education system is unique due to the fact that there are still dozens of schools which are known as private schools and they perpetuate privilege and social division. Most countries have some private schools for the children of the wealthy; England is able to more than triple the average number globally. England has around 3,000 private schools and just under half a million children are educated at them whilst some nine million children are educated at state schools. The overwhelming majority of students at private schools also come from middle-class families.

The result of this system is evident and it has much English history embedded within it. The facts seem to speak for themselves. In the private system almost half the students go on to University, whilst in the state system only about eight per cent make it to further education. However, statistics such as these can be deceptive due to the fact that middle-class children do better at examinations than working class ones, and most of them stay on at school after 16. Private schools therefore have the advantage over state schools as they are entirely ‘middle class’, and this creates an environment of success where students work harder and apply themselves more diligently to their school work.

Private schools are extortionately expensive, being as much as £18,000 a year at somewhere such as Harrow or Eton, where Princes William and Harry attended, and at least £8,000 a year almost everywhere else. There are many parents who are not wealthy or even comfortably off but are willing to sacrifice a great deal in the cause of their children’s schooling. It baffles many people as to why they need to spend such vast amounts when there are perfectly acceptable state schools that don’t cost a penny. One father gave his reasoning for sending his son to a private school, ‘If my son gets a five-percent-better chance of going to University then that may be the difference between success and failure.’ It would seem to the average person that a £50,000 minimum total cost of second-level education is a lot to pay for a five-percent-better chance. Most children, given the choice, would take the money and spend it on more enjoyable things rather than shelling it out on a school that is too posh for its own good.

However, some say that the real reason that parents fork out the cash is prejudice: they don’t want their little kids mixing with the “workers”, or picking up an undesirable accent. In addition to this, it wouldn’t do if at the next dinner party all the guests were boasting about sending their kids to the same place where the son of the third cousin of Prince Charles is going, and you say your kid is going to the state school down the road, even if you could pocket the money for yourself instead, and, as a result, be able to serve the best Champagne with the smoked salmon and duck.

It is a fact, however, that at many of the best private schools, your money buys you something. One school, with 500 pupils, has 11 science laboratories; another school, with 800 pupils, has 30 music practice rooms; another has 16 squash courts, and yet another has its own beach. Private schools spend £300 per pupil a year on investment in buildings and facilities; the state system spends less than £50. On books, the ratio is 3 to 1.

One of the things that your money buys which is difficult to quantify is the appearance of the school, the way it looks. Most private schools that you will find are set in beautiful, well-kept country houses, with extensive grounds and gardens. In comparison with the state schools, they tend to look like castles, with the worst of the state schools looking like public lavatories, perhaps even tiled or covered in graffiti. Many may even have an architectural design that is just about on the level of an industrial shed.

 

 


The MIT factor: celebrating 150 years of maverick genius

The Massachusetts Institute of Technology has led the world into the future for 150 years with scientific innovations.

The musician Yo-Yo Ma’s cello may not be the obvious starting point for a journey into one of the world’s great universities. But, as you quickly realise when you step inside the Massachusetts Institute of Technology, there’s precious little going on that you would normally see on a university campus. The cello, resting in a corner of MIT’s celebrated media laboratory — a hub of creativity — looks like any other electric classical instrument. But it is much more. Machover, the composer, teacher and inventor responsible for its creation, calls it a ‘hyperinstrument’, a sort of thinking machine that allows Ma and his cello to interact with one another and make music together. ‘The aim is to build an instrument worthy of a great musician like Yo-Yo Ma that can understand what he is trying to do and respond to it,’ Machover says. The cello has numerous sensors across its body and by measuring the pressure, speed and angle of the virtuoso’s performance it can interpret his mood and engage with it, producing extraordinary new sounds. The virtuoso cellist frequently performs on the instrument as he tours around the world. 

Machover’s passion for pushing at the boundaries of the existing world to extend and unleash human potential is not a bad description of MIT as a whole. This unusual community brings highly gifted, highly motivated individuals together from a vast range of disciplines, united by a common desire: to leap into the dark and reach for the unknown.

The result of that single unifying ambition is visible all around. For the past 150 years, MIT has been leading the world into the future. The discoveries of its teachers and students have become the common everyday objects that we now all take for granted. The telephone, electromagnets, radars, high-speed photography, office photocopiers, cancer treatments, pocket calculators, computers, the Internet, the decoding of the human genome, lasers, space travel ... the list of innovations that involved essential contributions from MIT and its faculty goes on and on.

From the moment MIT was founded by William Barton Rogers in 1861, it was clear what it was not. While Harvard stuck to the English model of a classical education, with its emphasis on Latin and Greek, MIT looked to the German system of learning based on research and hands-on experimentation. Knowledge was at a premium, but it had to be useful.

This down-to-earth quality is enshrined in the school motto, Mens et manus - Mind and hand - as well as its logo, which shows a gowned scholar standing beside an ironmonger bearing a hammer and anvil. That symbiosis of intellect and craftsmanship still suffuses the institute’s classrooms, where students are not so much taught as engaged and inspired.

Take Christopher Merrill, 21, a third-year undergraduate in computer science. He is spending most of his time on a competition set in his robotics class. The contest is to see which student can most effectively program a robot to build a house out of blocks in under ten minutes. Merrill says he could have gone for the easiest route - designing a simple robot that would build the house quickly. But he wanted to try to master an area of robotics that remains unconquered — adaptability, the ability of the robot to rethink its plans as the environment around it changes, as would a human.

‘I like to take on things that have never been done before rather than to work in an iterative way just making small steps forward,’ he explains.

Merrill is already planning the start-up he wants to set up when he graduates in a year’s time. He has an idea for an original version of a contact lens that would augment reality by allowing consumers to see additional visual information. He is fearful that he might be just too late in taking his concept to market, as he has heard that a Silicon Valley firm is already developing something similar. As such, he might become one of many MIT graduates who go on to form companies that fail. Alternatively, he might become one of those who go on to succeed in spectacular fashion. And there are many of them. A survey of living MIT alumni* found that they have formed 25,800 companies, employing more than three million people, including about a quarter of the workforce of Silicon Valley. What MIT delights in is taking brilliant minds from around the world in vastly diverse disciplines and putting them together. You can see that in its sparkling new David Koch Institute for Integrative Cancer Research, which brings scientists, engineers and clinicians under one roof.

Or in its Energy Initiative, which acts as a bridge for MIT’s combined work across all its five schools, channelling huge resources into the search for a solution to global warming. It works to improve the efficiency of existing energy sources, including nuclear power. It is also forging ahead with alternative energies from solar to wind and geothermal, and has recently developed the use of viruses to synthesise batteries that could prove crucial in the advancement of electric cars.

In the words of Tim Berners-Lee, the Briton who invented the World Wide Web, ‘It’s not just another university.

Even though I spend my time with my head buried in the details of web technology, the nice thing is that when I do walk the corridors, I bump into people who are working in other fields with their students that are fascinating, and that keeps me intellectually alive.’

adapted from the Guardian

* people who have left a university or college after completing their studies there

 

 

Numeration


One of the first great intellectual feats of a young child is learning how to talk, closely followed by learning how to count. From earliest childhood we are so bound up with our system of numeration that it is a feat of imagination to consider the problems faced by early humans who had not yet developed this facility. Careful consideration of our system of numeration leads to the conviction that, rather than being a facility that comes naturally to a person, it is one of the great and remarkable achievements of the human race.

It is impossible to learn the sequence of events that led to our developing the concept of number. Even the earliest of tribes had a system of numeration that, if not advanced, was sufficient for the tasks that they had to perform. Our ancestors had little use for actual numbers; instead their considerations would have been more of the kind Is this enough? rather than How many? when they were engaged in food gathering, for example. However, when early humans first began to reflect on the nature of things around them, they discovered that they needed an idea of number simply to keep their thoughts in order. As they began to settle, grow plants and herd animals, the need for a sophisticated number system became paramount. It will never be known how and when this numeration ability developed, but it is certain that numeration was well developed by the time humans had formed even semipermanent settlements.

Evidence of early stages of arithmetic and numeration can be readily found. The indigenous peoples of Tasmania were only able to count one, two, many; those of South Africa counted one, two, two and one, two twos, two twos and one, and so on. But in real situations the number words are often accompanied by gestures to help resolve any confusion. For example, when using the one, two, many type of system, the word many would mean, Look at my hands and see how many fingers I am showing you. This basic approach is limited in the range of numbers that it can express, but this range will generally suffice when dealing with the simpler aspects of human existence.

The lack of ability of some cultures to deal with large numbers is not really surprising. European languages, when traced back to their earlier versions, are very poor in number words and expressions. The ancient Gothic word for ten, tachund, is used to express the number 100 as tachund tachund. By the seventh century, the word teon had become interchangeable with the tachund or hund of the Anglo-Saxon language, and so 100 was denoted as hund teontig, or ten times ten. The average person in the seventh century in Europe was not as familiar with numbers as we are today. In fact, to qualify as a witness in a court of law a man had to be able to count to nine!

Perhaps the most fundamental step in developing a sense of number is not the ability to count, but rather to see that a number is really an abstract idea instead of a simple attachment to a group of particular objects. It must have been within the grasp of the earliest humans to conceive that four birds are distinct from two birds; however, it is not an elementary step to associate the number 4, as connected with four birds, to the number 4, as connected with four rocks. Associating a number as one of the qualities of a specific object is a great hindrance to the development of a true number sense. When the number 4 can be registered in the mind as a specific word, independent of the object being referenced, the individual is ready to take the first step toward the development of a notational system for numbers and, from there, to arithmetic.

Traces of the very first stages in the development of numeration can be seen in several living languages today. The numeration system of the Tsimshian language in British Columbia contains seven distinct sets of words for numbers according to the class of the item being counted: for counting flat objects and animals, for round objects and time, for people, for long objects and trees, for canoes, for measures, and for counting when no particular object is being numerated. It seems that the last is a later development while the first six groups show the relics of an older system. This diversity of number names can also be found in some widely used languages such as Japanese.

Intermixed with the development of a number sense is the development of an ability to count. Counting is not directly related to the formation of a number concept because it is possible to count by matching the items being counted against a group of pebbles, grains of corn, or the counter's fingers. These aids would have been indispensable to very early people who would have found the process impossible without some form of mechanical aid. Such aids, while different, are still used even by the most educated in today's society due to their convenience.

All counting ultimately involves reference to something other than the things being counted. At first it may have been grains or pebbles but now it is a memorised sequence of words that happen to be the names of the numbers.

 

 

The Nature of Genius

There has always been an interest in geniuses and prodigies. The word ‘genius’, from the Latin gens (= family) and the term ‘genius’, meaning ‘begetter’, comes from the early Roman cult of a divinity as the head of the family. In its earliest form, genius was concerned with the ability of the head of the family, the paterfamilias, to perpetuate himself. Gradually, genius came to represent a person’s characteristics and thence an individual’s highest attributes derived from his ‘genius’ or guiding spirit. Today, people still look to stars or genes, astrology or genetics, in the hope of finding the source of exceptional abilities or personal characteristics.

The concept of genius and of gifts has become part of our folk culture, and attitudes are ambivalent towards them. We envy the gifted and mistrust them. In the mythology of giftedness, it is popularly believed that if people are talented in one area, they must be defective in another, that intellectuals are impractical, that prodigies burn too brightly too soon and burn out, that gifted people are eccentric, that they are physical weaklings, that there’s a thin line between genius and madness, that genius runs in families, that the gifted are so clever they don’t need special help, that giftedness is the same as having a high IQ, that some races are more intelligent or musical or mathematical than others, that genius goes unrecognised and unrewarded, that adversity makes men wise or that people with gifts have a responsibility to use them. Language has been enriched with such terms as ‘highbrow’, ‘egghead’, ‘blue-stocking’, ‘wiseacre’, ‘know-all’, ‘boffin’ and, for many, ‘intellectual’ is a term of denigration.

The nineteenth century saw considerable interest in the nature of genius, and produced not a few studies of famous prodigies. Perhaps for us today, two of the most significant aspects of most of these studies of genius are the frequency with which early encouragement and teaching by parents and tutors had beneficial effects on the intellectual, artistic or musical development of the children but caused great difficulties of adjustment later in their lives, and the frequency with which abilities went unrecognised by teachers and schools. However, the difficulty with the evidence produced by these studies, fascinating as they are in collecting together anecdotes and apparent similarities and exceptions, is that they are not what we would today call norm-referenced. In other words, when, for instance, information is collated about early illnesses, methods of upbringing, schooling, etc., we must also take into account information from other historical sources about how common or exceptional these were at the time. For instance, infant mortality was high and life expectancy much shorter than today, home tutoring was common in the families of the nobility and wealthy, bullying and corporal punishment were common at the best independent schools and, for the most part, the cases studied were members of the privileged classes. It was only with the growth of paediatrics and psychology in the twentieth century that studies could be carried out on a more objective, if still not always very scientific, basis.

Geniuses, however they are defined, are but the peaks which stand out through the mist of history and are visible to the particular observer from his or her particular vantage point. Change the observers and the vantage points, clear away some of the mist, and a different lot of peaks appear. Genius is a term we apply to those whom we recognise for their outstanding achievements and who stand near the end of the continuum of human abilities which reaches back through the mundane and mediocre to the incapable. There is still much truth in Dr Samuel Johnson’s observation, ‘The true genius is a mind of large general powers, accidentally determined to some particular direction’. We may disagree with the ‘general’, for we doubt if all musicians of genius could have become scientists of genius or vice versa, but there is no doubting the accidental determination which nurtured or triggered their gifts into those channels into which they have poured their powers so successfully. Along the continuum of abilities are hundreds of thousands of gifted men and women, boys and girls.

What we appreciate, enjoy or marvel at in the works of genius or the achievements of prodigies are the manifestations of skills or abilities which are similar to, but so much superior to, our own. But that their minds are not different from our own is demonstrated by the fact that the hard-won discoveries of scientists like Kepler or Einstein become the commonplace knowledge of schoolchildren and the once outrageous shapes and colours of an artist like Paul Klee so soon appear on the fabrics we wear. This does not minimise the supremacy of their achievements, which outstrip our own as the sub-four-minute milers outstrip our jogging.

To think of geniuses and the gifted as having uniquely different brains is only reasonable if we accept that each human brain is uniquely different. The purpose of instruction is to make us even more different from one another, and in the process of being educated we can learn from the achievements of those more gifted than ourselves. But before we try to emulate geniuses or encourage our children to do so we should note that some of the things we learn from them may prove unpalatable. We may envy their achievements and fame, but we should also recognise the price they may have paid in terms of perseverance, single-mindedness, dedication, restrictions on their personal lives, the demands upon their energies and time, and how often they had to display great courage to preserve their integrity or to make their way to the top.

Genius and giftedness are relative descriptive terms of no real substance. We may, at best, give them some precision by defining them and placing them in a context but, whatever we do, we should never delude ourselves into believing that gifted children or geniuses are different from the rest of humanity, save in the degree to which they have developed the performance of their abilities.

 

 

Do literate women make better mothers?

Children in developing countries are healthier and more likely to survive past the age of five when their mothers can read and write. Experts in public health accepted this idea decades ago, but until now no one has been able to show that a woman's ability to read in itself improves her children’s chances of survival.

Most literate women learnt to read in primary school, and the fact that a woman has had an education may simply indicate her family’s wealth or that it values its children more highly. Now a long-term study carried out in Nicaragua has eliminated these factors by showing that teaching reading to poor adult women, who would otherwise have remained illiterate, has a direct effect on their children’s health and survival.

In 1979, the government of Nicaragua established a number of social programmes, including a National Literacy Crusade. By 1985, about 300,000 illiterate adults from all over the country, many of whom had never attended primary school, had learnt how to read, write and use numbers.

During this period, researchers from the Liverpool School of Tropical Medicine, the Central American Institute of Health in Nicaragua, the National Autonomous University of Nicaragua and the Costa Rican Institute of Health interviewed nearly 3,000 women, some of whom had learnt to read as children, some during the literacy crusade and some who had never learnt at all. The women were asked how many children they had given birth to and how many of them had died in infancy. The research teams also examined the surviving children to find out how well-nourished they were.

The investigators' findings were striking. In the late 1970s, the infant mortality rate for the children of illiterate mothers was around 110 deaths per thousand live births. At this point in their lives, those mothers who later went on to learn to read had a similar level of child mortality (105/1000). For women educated in primary school, however, the infant mortality rate was significantly lower, at 80 per thousand.

In 1985, after the National Literacy Crusade had ended, the infant mortality figures for those who remained illiterate and for those educated in primary school remained more or less unchanged. For those women who learnt to read through the campaign, the infant mortality rate was 84 per thousand, an impressive 21 points lower than for those women who were still illiterate. The children of the newly-literate mothers were also better nourished than those of women who could not read.

Why are the children of literate mothers better off? According to Peter Sandiford of the Liverpool School of Tropical Medicine, no one knows for certain. Child health was not on the curriculum during the women’s lessons, so he and his colleagues are looking at other factors. They are working with the same group of 3,000 women, to try to find out whether reading mothers make better use of hospitals and clinics, opt for smaller families, exert more control at home, learn modern childcare techniques more quickly, or whether they merely have more respect for themselves and their children.

The Nicaraguan study may have important implications for governments and aid agencies that need to know where to direct their resources. Sandiford says that there is increasing evidence that female education, at any age, is ‘an important health intervention in its own right’. The results of the study lend support to the World Bank's recommendation that education budgets in developing countries should be increased, not just to help their economies, but also to improve child health.

'We’ve known for a long time that maternal education is important,’ says John Cleland of the London School of Hygiene and Tropical Medicine. ‘But we thought that even if we started educating girls today, we'd have to wait a generation for the pay off. The Nicaraguan study suggests we may be able to bypass that.'

Cleland warns that the Nicaraguan crusade was special in many ways, and similar campaigns elsewhere might not work as well. It is notoriously difficult to teach adults skills that do not have an immediate impact on their everyday lives, and many literacy campaigns in other countries have been much less successful. 'The crusade was part of a larger effort to bring a better life to the people,’ says Cleland. Replicating these conditions in other countries will be a major challenge for development workers.

 

 

Children Tested to Destruction?

English primary school pupils subjected to more tests than in any other country

English primary school pupils have to deal with unprecedented levels of pressure as they face tests more frequently, at a younger age, and in more subjects than children from any other country, according to one of the biggest international education inquiries in decades. The damning indictment of England’s primary education system revealed that the country’s children are now the most tested in the world. From their very earliest days at school they must navigate a set-up whose trademark is ‘high stakes’ testing, according to a recent report.

Parents are encouraged to choose schools for their children based on league tables of test scores. But this puts children under extreme pressure which could damage their motivation and self-esteem, as well as encouraging schools to ‘teach to the test’ at the expense of pupils’ wider learning, the study found. The findings are part of a two-year inquiry – led by Cambridge University – into English primary schools. Other parts of the UK and countries such as France, Norway and Japan used testing but it was ‘less intrusive, less comprehensive, and considerably less frequent’, Cambridge’s Primary Review concluded.

England was unique in using testing to control what is taught in schools, to monitor teaching standards and to encourage parents to choose schools based on the results of the tests, according to Kathy Hall, from the National University of Ireland in Cork, and Kamil Ozerk, from the University of Oslo, who conducted the research. ‘Assessment in England, compared to our other reviewed countries, is pervasive, highly consequential, and taken by officialdom and the public more generally to portray objectively the actual quality of primary education in schools,’ their report concluded. Teachers’ leaders said the testing regime was ‘past its sell-by date’ and called for a fundamental review of assessment.

Steve Sinnott, General Secretary of the National Union of Teachers, said England’s testing system was having a ‘devastating’ impact on schools. ‘Uniquely, England is a country where testing is used to police schools and control what is taught,’ he said. ‘When it comes to testing in England, the tail wags the dog. It is patently absurd that even the structure and content of education is shaped by the demands of the tests. I call on the Government to initiate a full and independent review of the impact of the current testing system on schools and on children’s learning and to be prepared to dismantle a system which is long past its sell-by date.’

John Dunford, General Secretary of the Association of School and College Leaders, warned that the tests were having a damaging effect on pupils. ‘The whole testing regime is governed by the need to produce league tables,’ he said. ‘It has more to do with holding schools to account than helping pupils to progress.’

The fear that many children were suffering intolerable stress because of the tests was voiced by Mick Brookes, General Secretary of the National Association of Head Teachers. ‘There are schools that start rehearsing for key stage two SATs [Standard Assessment Tests] from the moment the children arrive in September. That’s just utterly ridiculous,’ he said. ‘There are other schools that rehearse SATs during Christmas week. These are young children we are talking about. They should be having the time of their lives at school, not just worrying about tests. It is the breadth and richness of the curriculum that suffers. The consequences for schools not reaching their targets are dire – heads can lose their jobs and schools can be closed down. With this at stake it’s not surprising that schools let the tests take over.’

David Laws, the Liberal Democrat schools spokesman, said: ‘The uniquely high stakes placed on national tests mean that many primary schools have become too exam-focused.’ However, the Government rejected the criticism. ‘The idea that children are over-tested is not a view that the Government accepts,’ a spokesman said. ‘The reality is that children spend a very small percentage of their time in school being tested. Seeing that children leave school up to the right standard in the basics is the highest priority of the Government.’

In another child-centred initiative, both major political parties in the UK – Labour and the Conservatives – have announced plans to make Britain more child-friendly following a report by UNICEF which ranked the UK the worst place to be a child out of 21 rich nations.

Parents were warned that they risked creating a generation of ‘battery-farmed children’ by always keeping them indoors to ensure their safety. The families minister, Kevin Brennan, called for an end to the ‘cotton wool’ culture and warned that children would not learn to cope with risks if they were never allowed to play outdoors.

 

 

Learning color words

Young children struggle with color concepts, and the reason for this may have something to do with how we use the words that describe them.

In the course of the first few years of their lives, children who are brought up in English- speaking homes successfully master the use of hundreds of words. Words for objects, actions, emotions, and many other aspects of the physical world quickly become part of their infant repertoire. For some reason, however, when it comes to learning color words, the same children perform very badly. At the age of four months, babies can distinguish between basic color categories. Yet it turns out they do this in much the same way as blind children. "Blue" and "yellow" appear in older children's expressive language in answer to questions such as "What color is this?", but their mapping of objects to individual colors is haphazard and interchangeable. If shown a blue cup and asked about its color, typical two-year-olds seem as likely to come up with "red" as "blue." Even after hundreds of training trials, children as old as four may still end up being unable to accurately sort objects by color.

In an effort to work out why this is, cognitive scientists at Stanford University in California hypothesized that children's incompetence at color-word learning may be directly linked to the way these words are used in English. While word order for color adjectives varies, they are used overwhelmingly in pre-nominal position (e.g. "blue cup"); in other words, the adjective comes before the noun it is describing. This is in contrast to post-nominal position (e.g. "The cup is blue") where the adjective comes after the noun. It seems that the difficulty children have may not be caused by any unique property of color, or indeed, of the world. Rather, it may simply come down to the challenge of having to make predictions from color words to the objects they refer to, instead of being able to make predictions from the world of objects to the color words.

To illustrate, the word "chair" has a meaning that applies to the somewhat varied set of entities in the world that people use for sitting on. Chairs have features, such as arms and legs and backs, that are combined to some degree in a systematic way; they turn up in a range of chairs of different shapes, sizes, and ages. It could be said that children learn to narrow down the set of cues that make up a chair and in this way they learn the concept associated with that word. On the other hand, color words tend to be unique and not bound to other specific co-occurring features; there is nothing systematic about color words to help cue their meaning. In the speech that adults direct at children, color adjectives occur pre-nominally ("blue cup") around 70 percent of the time. This suggests that most of what children hear from adults will, in fact, be unhelpful in learning what color words refer to.

To explore this idea further, the research team recruited 41 English children aged between 23 and 29 months and carried out a three-phase experiment. It consisted of a pre-test, followed by training in the use of color words, and finally a post-test that was identical to the pre-test. The pre- and post-test materials comprised six objects that were novel to the children. There were three examples of each object in each of three colors—red, yellow, and blue. The objects were presented on trays, and in both tests, the children were asked to pick out objects in response to requests in which the color word was either pre-nominal ("Which is the red one?") or post-nominal ("Which one is red?").

In the training, the children were introduced to a "magic bucket" containing five sets of items familiar to 26-month-olds (balls, cups, crayons, glasses, and toy bears) in each of the three colors. The training was set up so that half the children were presented with the items one by one and heard them labelled with color words used pre-nominally ("This is a red crayon"), while the other half were introduced to the same items described with a post-nominal color word ("This crayon is red"). After the training, the children repeated the selection task on the unknown items in the post-test. To assess the quality of children's understanding of the color words, and the effect of each type of training, correct choices on items that were consistent across the pre- and post-tests were used to measure children's color knowledge.

Individual analysis of pre- and post-test data, which confirmed parental vocabulary reports, showed the children had at least some knowledge of the three color words: they averaged two out of three correct choices in response to both pre- and post-nominal question types, which, it has been pointed out, is better than chance. When children's responses to the question types were assessed independently, performance was at its most consistent when children were both trained and tested on post-nominal adjectives, and worst when trained on pre-nominal adjectives and tested on post-nominal adjectives. Only children who had been trained with post-nominal color-word presentation and then tested with post-nominal question types were significantly more accurate than chance. Comparing the pre- and post-test scores across each condition revealed a significant decline in performance when children were both pre- and post-tested with questions that placed the color words pre-nominally.
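
As a rough check on the 'better than chance' claim above: assuming that a child guessing at random among the three color categories (red, yellow, and blue) would be right about one time in three - an assumption made here for illustration, since the study's own chance estimate is not quoted in this passage - the comparison is

\[ P(\text{correct by guessing}) \approx \tfrac{1}{3} \approx 0.33, \qquad \text{observed average} \approx \tfrac{2}{3} \approx 0.67 \]

so an average of two correct choices out of three sits well above what guessing alone would be expected to produce.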

As predicted, when children are exposed to color adjectives in post-nominal position, they learn them rapidly (after just five training trials per color); when they are presented with them pre-nominally, as English overwhelmingly tends to do, children show no signs of learning.

 

 

Learning by Examples

Learning Theory is rooted in the work of Ivan Pavlov, the famous scientist who discovered and documented the principles governing how animals (humans included) learn in the 1900s. Two basic kinds of learning or conditioning occur, one of which is famously known as classical conditioning. Classical conditioning happens when an animal learns to associate a neutral stimulus (signal) with a stimulus that has intrinsic meaning based on how closely in time the two stimuli are presented. The classic example of classical conditioning is a dog's ability to associate the sound of a bell (something that originally has no meaning to the dog) with the presentation of food (something that has a lot of meaning to the dog) a few moments later. Dogs are able to learn the association between bell and food, and will salivate immediately after hearing the bell once this connection has been made. Years of learning research have led to the creation of a highly precise learning theory that can be used to understand and predict how and under what circumstances almost any animal will learn, including human beings, and eventually help people figure out how to change their behaviours.

Role models are a popular notion for guiding child development, but in recent years very interesting research has been done on learning by examples in other animals. If the subject of animal learning is taught very much in terms of classical or operant conditioning, it places too much emphasis on how we allow animals to learn and not enough on how they are equipped to learn. To teach a course of mine, I have been dipping profitably into a very interesting and accessible compilation of papers on social learning in mammals, including chimps and human children, edited by Heyes and Galef (1996).

The research reported in one paper started with a school field trip in Israel to a pine forest where many pine cones were discovered, stripped to the central core. So the investigation started with no weighty theoretical intent, but was directed at finding out what was eating the nutritious pine seeds and how they managed to get them out of the cones. The culprit proved to be the versatile and athletic black rat (Rattus rattus), and the technique was to bite each cone scale off at its base, in sequence from base to tip following the spiral growth pattern of the cone.

Urban black rats were found to lack the skill and were unable to learn it even if housed with experienced cone strippers. However, infants of urban mothers cross-fostered by stripper mothers acquired the skill, whereas infants of stripper mothers fostered by an urban mother could not. Clearly the skill had to be learned from the mother. Further elegant experiments showed that naive adults could develop the skill if they were provided with cones from which the first complete spiral of scales had been removed; rather like our new photocopier which you can work out how to use once someone has shown you how to switch it on. In the case of rats, the youngsters take cones away from the mother when she is still feeding on them, allowing them to acquire the complete stripping skill.

A good example of adaptive behaviour, we might conclude; but what about the economics? This was determined by measuring the oxygen uptake of a rat stripping a cone in a metabolic chamber to calculate the energetic cost, and comparing it with the benefit of the pine seeds measured by calorimeter. The cost proved to be less than 10% of the energetic value of the cone. An acceptable profit margin.

A 1996 paper in Animal Behaviour by Bednekoff and Balda provides a different view of the adaptiveness of social learning. It concerns the seed-caching behaviour of Clark's Nutcracker (Nucifraga columbiana) and the Mexican Jay (Aphelocoma ultramarina). The former is a specialist, caching 30,000 or so seeds in scattered locations that it will recover over the months of winter; the Mexican Jay will also cache food but is much less dependent upon this than the Nutcracker. The two species also differ in their social structure: the Nutcracker being rather solitary while the Jay forages in social groups.

The experiment is to discover not just whether a bird can remember where it hid a seed but also if it can remember where it saw another bird hide a seed. The design is slightly comical, with a cacher bird wandering about a room with lots of holes in the floor, hiding food in some of the holes, while watched by an observer bird perched in a cage. Two days later, cachers and observers are tested for their discovery rate against an estimated random performance. In the role of cacher, not only the Nutcracker but also the less specialised Jay performed above chance; more surprisingly, however, Jay observers were as successful as Jay cachers whereas Nutcracker observers did no better than chance. It seems that, whereas the Nutcracker is highly adapted at remembering where it hid its own seeds, the social living Mexican Jay is more adept at remembering, and so exploiting, the caches of others.

 

 

Rainforests and the implications for course design

Adults and children are frequently confronted with statements about the alarming rate of loss of tropical rainforests. For example, one graphic illustration to which children might readily relate is the estimate that rainforests are being destroyed at a rate equivalent to one thousand football fields every forty minutes - about the duration of a normal classroom period. In the face of the frequent and often vivid media coverage, it is likely that children will have formed ideas about rainforests - what and where they are, why they are important, what endangers them - independent of any formal tuition. It is also possible that some of these ideas will be mistaken.

Many studies have shown that children harbour misconceptions about ‘pure’, curriculum science. These misconceptions do not remain isolated but become incorporated into a multifaceted, but organised, conceptual framework, making it and the component ideas, some of which are erroneous, more robust but also accessible to modification. These ideas may be developed by children absorbing ideas through the popular media. Sometimes this information may be erroneous. It seems schools may not be providing an opportunity for children to re-express their ideas and so have them tested and refined by teachers and their peers.

Despite the extensive coverage in the popular media of the destruction of rainforests, little formal information is available about children’s ideas in this area. The aim of the present study is to start to provide such information, to help teachers design their educational strategies to build upon correct ideas and to displace misconceptions and to plan programmes in environmental studies in their schools.

The study surveys children’s scientific knowledge and attitudes to rainforests. Secondary school children were asked to complete a questionnaire containing five open-form questions. The most frequent responses to the first question were descriptions which are self-evident from the term ‘rainforest’. Some children described them as damp, wet or hot. The second question concerned the geographical location of rainforests. The commonest responses were continents or countries: Africa (given by 43% of children), South America (30%), Brazil (25%). Some children also gave more general locations, such as being near the Equator. 

Responses to question three concerned the importance of rainforests. The dominant idea, raised by 64% of the pupils, was that rainforests provide animals with habitats. Fewer students responded that rainforests provide plant habitats, and even fewer mentioned the indigenous populations of rainforests. More girls (70%) than boys (60%) raised the idea of rainforest as animal habitats.

Similarly, but at a lower level, more girls (13%) than boys (5%) said that rainforests provided human habitats. These observations are generally consistent with our previous studies of pupils’ views about the use and conservation of rainforests, in which girls were shown to be more sympathetic to animals and expressed views which seem to place an intrinsic value on non-human animal life.

The fourth question concerned the causes of the destruction of rainforests. Perhaps encouragingly, more than half of the pupils (59%) identified that it is human activities which are destroying rainforests, some personalising the responsibility by the use of terms such as ‘we are’. About 18% of the pupils referred specifically to logging activity.

One misconception, expressed by some 10% of the pupils, was that acid rain is responsible for rainforest destruction; a similar proportion said that pollution is destroying rainforests. Here, children are confusing rainforest destruction with damage to the forests of Western Europe by these factors. While two fifths of the students provided the information that the rainforests provide oxygen, in some cases this response also embraced the misconception that rainforest destruction would reduce atmospheric oxygen, making the atmosphere incompatible with human life on Earth.

In answer to the final question about the importance of rainforest conservation, the majority of children simply said that we need rainforests to survive. Only a few of the pupils (6%) mentioned that rainforest destruction may contribute to global warming. This is surprising considering the high level of media coverage on this issue. Some children expressed the idea that the conservation of rainforests is not important.

The results of this study suggest that certain ideas predominate in the thinking of children about rainforests. Pupils’ responses indicate some misconceptions in basic scientific knowledge of rainforests’ ecosystems such as their ideas about rainforests as habitats for animals, plants and humans and the relationship between climatic change and destruction of rainforests.

Pupils did not volunteer ideas that suggested that they appreciated the complexity of causes of rainforest destruction. In other words, they gave no indication of an appreciation of either the range of ways in which rainforests are important or the complex social, economic and political factors which drive the activities which are destroying the rainforests. One encouragement is that the results of similar studies about other environmental issues suggest that older children seem to acquire the ability to appreciate, value and evaluate conflicting views. Environmental education offers an arena in which these skills can be developed, which is essential for these children as future decision-makers.

Old Man of the Lake

Small, localised enterprises are becoming ever-more imaginative in identifying opportunities to boost tourism for their areas. A more unusual attraction is the Old Man of the Lake, which is the name given to a 9-metre-tall tree stump that has been bobbing vertically in Oregon's Crater Lake since at least 1896. For over one hundred years, it has been largely ignored but recently it has become a must-see item on the list of lake attractions. Since January 2012, tour boats regularly include the Old Man on their sightseeing trips around the lake.

At the waterline, the stump is about 60 centimetres in diameter, and the exposed part stands approximately 120 centimetres above the surface of the water. Over the years, the stump has been bleached white by the elements. The exposed end of the floating tree is splintered and worn but wide and buoyant enough to support a person’s weight.

Observations indicated that the Old Man of Crater Lake travels quite extensively, and sometimes with surprising rapidity. Since it can be seen virtually anywhere on the lake, boat pilots commonly communicate its position to each other as a general matter of safety.

 

 

White mountain, green tourism

The French Alpine town of Chamonix has been a magnet for tourists since the 18th century. But today, tourism and climate change are putting pressure on the surrounding environment. Marc Grainger reports.

The town of Chamonix-Mont-Blanc sits in a valley at 1,035 metres above sea level in the Haute-Savoie department in south-eastern France. To the north-west are the red peaks of the Aiguilles Rouges massif; to the south-east are the permanently white peaks of Mont Blanc, which at 4,810 metres is the highest mountain in the Alps. It’s a typical Alpine environment, but one that is under increasing strain from the hustle and bustle of human activity.

Tourism is Chamonix’s lifeblood. Visitors have been encouraged to visit the valley ever since it was discovered by explorers in 1741. Over 40 years later, in 1786, Mont Blanc’s summit was finally reached by a French doctor and his guide, and this gave birth to the sport of alpinism, with Chamonix at its centre. In 1924, it hosted the first Winter Olympics, and the cable cars and lifts that were built in the years that followed gave everyone access to the ski slopes.

Today, Chamonix is a modern town, connected to the outside world via the Mont Blanc Road Tunnel and a busy highway network. It receives up to 60,000 visitors at a time during the ski season, and climbers, hikers and extreme-sports enthusiasts swarm there in the summer in even greater numbers, swelling the town’s population to 100,000. It is the third most visited natural site in the world, according to Chamonix’s Tourism Office and, last year, it had 5.2 million visitor bed nights - all this in a town with fewer than 10,000 permanent inhabitants.

This influx of tourists has put the local environment under severe pressure, and the authorities in the valley have decided to take action. Educating visitors is vital. Tourists are warned not to drop rubbish, and there are now recycling points dotted all around the valley, from the town centre to halfway up the mountains. An internet blog reports environmental news in the town, and the ‘green’ message is delivered with all the tourist office’s activities.

Low-carbon initiatives are also important for the region. France is committed to reducing its carbon emissions by a factor of four by 2050. Central to achieving this aim is a strategy that encourages communities to identify their carbon emissions on a local level and make plans to reduce them. Studies have identified that accommodation accounts for half of all carbon emissions in the Chamonix valley. Hotels are known to be inefficient operations, but those around Chamonix are now cleaning up their act. Some are using low-energy lighting, restricting water use and making recycling bins available for guests; others have invested in huge projects such as furnishing and decorating using locally sourced materials, using geothermal energy for heating and installing solar panels.

Chamonix’s council is encouraging the use of renewable energy in private properties too, by making funds available for green renovations and new constructions. At the same time, public-sector buildings have also undergone improvements to make them more energy efficient and less wasteful. For example, the local ice rink has reduced its annual water consumption from 140,000 cubic metres to 10,000 cubic metres in the space of three years.

Improving public transport is another feature of the new policy, as 80 percent of carbon emissions from transport used to come from private vehicles. While the Mont Blanc Express is an ideal way to travel within the valley - and see some incredible scenery along the route - it is much more difficult to arrive in Chamonix from outside by rail. There is no direct line from the closest airport in Geneva, so tourists arriving by air normally transfer by car or bus. However, at a cost of 3.3 million euros a year, Chamonix has introduced a free shuttle service in order to get people out of their cars and into buses fitted with particle filters.

If the valley’s visitors and residents want to know why they need to reduce their environmental impact, they just have to look up; the effects of climate change are there for everyone to see in the melting glaciers that cling to the mountains. The fragility of the Alpine environment has long been a concern among local people. Today, 70 percent of the 805 square kilometres that comprise Chamonix-Mont-Blanc is protected in some way. But now, the impact of tourism has led the authorities to recognise that more must be done if the valley is to remain prosperous: that they must not only protect the natural environment better, but also manage the numbers of visitors better, so that its residents can happily remain there.

 

 

The Impact of Wilderness Tourism

The market for tourism in remote areas is booming as never before. Countries all across the world are actively promoting their ‘wilderness’ regions - such as mountains, Arctic lands, deserts, small islands and wetlands - to high-spending tourists. The attraction of these areas is obvious: by definition, wilderness tourism requires little or no initial investment. But that does not mean that there is no cost. As the 1992 United Nations Conference on Environment and Development recognized, these regions are fragile (i.e. highly vulnerable to abnormal pressures) not just in terms of their ecology, but also in terms of the culture of their inhabitants. The three most significant types of fragile environment in these respects, and also in terms of the proportion of the Earth's surface they cover, are deserts, mountains and Arctic areas. An important characteristic is their marked seasonality, with harsh conditions prevailing for many months each year. Consequently, most human activities, including tourism, are limited to quite clearly defined parts of the year.

Tourists are drawn to these regions by their natural landscape beauty and the unique cultures of their indigenous people. And poor governments in these isolated areas have welcomed the new breed of ‘adventure tourist’, grateful for the hard currency they bring. For several years now, tourism has been the prime source of foreign exchange in Nepal and Bhutan. Tourism is also a key element in the economies of Arctic zones such as Lapland and Alaska and in desert areas such as Ayers Rock in Australia and Arizona’s Monument Valley.

Once a location is established as a main tourist destination, the effects on the local community are profound. When hill-farmers, for example, can make more money in a few weeks working as porters for foreign trekkers than they can in a year working in their fields, it is not surprising that many of them give up their farm-work, which is thus left to other members of the family. In some hill-regions, this has led to a serious decline in farm output and a change in the local diet, because there is insufficient labour to maintain terraces and irrigation systems and tend to crops. The result has been that many people in these regions have turned to outside supplies of rice and other foods.

In Arctic and desert societies, year-round survival has traditionally depended on hunting animals and fish and collecting fruit over a relatively short season. However, as some inhabitants become involved in tourism, they no longer have time to collect wild food; this has led to increasing dependence on bought food and stores. Tourism is not always the culprit behind such changes. All kinds of wage labour, or government handouts, tend to undermine traditional survival systems. Whatever the cause, the dilemma is always the same: what happens if these new, external sources of income dry up?

The physical impact of visitors is another serious problem associated with the growth in adventure tourism. Much attention has focused on erosion along major trails, but perhaps more important are the deforestation and impacts on water supplies arising from the need to provide tourists with cooked food and hot showers. In both mountains and deserts, slow-growing trees are often the main sources of fuel, and water supplies may be limited or vulnerable to degradation through heavy use.

Stories about the problems of tourism have become legion in the last few years. Yet it does not have to be a problem. Although tourism inevitably affects the region in which it takes place, the costs to these fragile environments and their local cultures can be minimized. Indeed, it can even be a vehicle for reinvigorating local cultures, as has happened with the Sherpas of Nepal’s Khumbu Valley and in some Alpine villages. And a growing number of adventure tourism operators are trying to ensure that their activities benefit the local population and environment over the long term.

In the Swiss Alps, communities have decided that their future depends on integrating tourism more effectively with the local economy. Local concern about the rising number of second home developments in the Swiss Pays d'Enhaut resulted in limits being imposed on their growth. There has also been a renaissance in communal cheese production in the area, providing the locals with a reliable source of income that does not depend on outside visitors.

Many of the Arctic tourist destinations have been exploited by outside companies, who employ transient workers and repatriate most of the profits to their home base. But some Arctic communities are now operating tour businesses themselves, thereby ensuring that the benefits accrue locally. For instance, a native corporation in Alaska, employing local people, is running an air tour from Anchorage to Kotzebue, where tourists eat Arctic food, walk on the tundra and watch local musicians and dancers.

Native people in the desert regions of the American Southwest have followed similar strategies, encouraging tourists to visit their pueblos and reservations to purchase high-quality handicrafts and artwork. The Acoma and San Ildefonso pueblos have established highly profitable pottery businesses, while the Navajo and Hopi groups have been similarly successful with jewellery.

Too many people living in fragile environments have lost control over their economies, their culture and their environment when tourism has penetrated their homelands. Merely restricting tourism cannot be the solution to the imbalance, because people's desire to see new places will not just disappear. Instead, communities in fragile environments must achieve greater control over tourism ventures in their regions, in order to balance their needs and aspirations with the demands of tourism. A growing number of communities are demonstrating that, with firm communal decision-making, this is possible. The critical question now is whether this can become the norm, rather than the exception.

 

 

Tourism

 

A

Tourism, holidaymaking and travel are these days more significant social phenomena than most commentators have considered. On the face of it there could not be a more trivial subject for a book. And indeed since social scientists have had considerable difficulty explaining weightier topics, such as work or politics, it might be thought that they would have great difficulties in accounting for more trivial phenomena such as holidaymaking. However, there are interesting parallels with the study of deviance. This involves the investigation of bizarre and idiosyncratic social practices which happen to be defined as deviant in some societies but not necessarily in others. The assumption is that the investigation of deviance can reveal interesting and significant aspects of normal societies. It could be said that a similar analysis can be applied to tourism.

 

B

Tourism is a leisure activity which presupposes its opposite, namely regulated and organised work. It is one manifestation of how work and leisure are organised as separate and regulated spheres of social practice in modern societies. Indeed acting as a tourist is one of the defining characteristics of being ‘modern’ and the popular concept of tourism is that it is organised within particular places and occurs for regularised periods of time. Tourist relationships arise from a movement of people to, and their stay in, various destinations. This necessarily involves some movement, that is the journey, and a period of stay in a new place or places. ‘The journey and the stay’ are by definition outside the normal places of residence and work and are of a short term and temporary nature and there is a clear intention to return ‘home’ within a relatively short period of time.

 

C

A substantial proportion of the population of modern societies engages in such tourist practices; new socialised forms of provision have developed in order to cope with the mass character of the gazes of tourists, as opposed to the individual character of travel. Places are chosen to be visited and be gazed upon because there is an anticipation, especially through daydreaming and fantasy, of intense pleasures, either on a different scale or involving different senses from those customarily encountered. Such anticipation is constructed and sustained through a variety of non-tourist practices, such as films, TV, literature, magazines, records and videos, which construct and reinforce this daydreaming.

 

D

Tourists tend to visit features of landscape and townscape which separate them off from everyday experience. Such aspects are viewed because they are taken to be in some sense out of the ordinary. The viewing of these tourist sights often involves different forms of social patterning, with a much greater sensitivity to visual elements of landscape or townscape than is normally found in everyday life. People linger over these sights in a way that they would not normally do in their home environment, and the vision is objectified or captured through photographs, postcards, films and so on, which enable the memory to be endlessly reproduced and recaptured.

 

E

One of the earliest dissertations on the subject of tourism is Boorstin's analysis of the pseudo-event (1964), where he argues that contemporary Americans cannot experience reality directly but thrive on pseudo-events. Isolated from the host environment and the local people, the mass tourist travels in guided groups and finds pleasure in inauthentic, contrived attractions, gullibly enjoying the pseudo-events and disregarding the real world outside. Over time the images generated of different tourist sights come to constitute a closed, self-perpetuating system of illusions which provide the tourist with the basis for selecting and evaluating potential places to visit. Such visits are made, says Boorstin, within the 'environmental bubble' of the familiar American-style hotel which insulates the tourist from the strangeness of the host environment.

 

F

To service the burgeoning tourist industry, an array of professionals has developed who attempt to reproduce ever-new objects for the tourist to look at. These objects or places are located in a complex and changing hierarchy. This depends upon the interplay between, on the one hand, competition between interests involved in the provision of such objects and, on the other hand, changing class, gender and generational distinctions of taste within the potential population of visitors. It has been said that to be a tourist is one of the characteristics of the modern experience. Not to go away is like not possessing a car or a nice house. Travel is a marker of status in modern societies and is also thought to be necessary for good health. The role of the professional, therefore, is to cater for the needs and tastes of the tourists in accordance with their class and overall expectations.

 

 

The Context, Meaning and Scope of Tourism

Travel has existed since the beginning of time, when primitive man set out, often traversing great distances in search of game, which provided the food and clothing necessary for his survival. Throughout the course of history, people have travelled for purposes of trade, religious conviction, economic gain, war, migration and other equally compelling motivations. In the Roman era, wealthy aristocrats and high government officials also travelled for pleasure. Seaside resorts located at Pompeii and Herculaneum afforded citizens the opportunity to escape to their vacation villas in order to avoid the summer heat of Rome. Travel, except during the Dark Ages, has continued to grow and, throughout recorded history, has played a vital role in the development of civilisations and their economies.

Tourism in the mass form as we know it today is a distinctly twentieth-century phenomenon. Historians suggest that the advent of mass tourism began in England during the industrial revolution with the rise of the middle class and the availability of relatively inexpensive transportation. The creation of the commercial airline industry following the Second World War and the subsequent development of the jet aircraft in the 1950s signalled the rapid growth and expansion of international travel. This growth led to the development of a major new industry: tourism. In turn, international tourism became the concern of a number of world governments since it not only provided new employment opportunities but also produced a means of earning foreign exchange.

Tourism today has grown significantly in both economic and social importance. In most industrialised countries over the past few years the fastest growth has been seen in the area of services. One of the largest segments of the service industry, although largely unrecognised as an entity in some of these countries, is travel and tourism. According to the World Travel and Tourism Council (1992), ‘Travel and tourism is the largest industry in the world on virtually any economic measure including value-added, capital investment, employment and tax contributions’. In 1992, the industry’s gross output was estimated to be $3.5 trillion, over 12 per cent of all consumer spending. The travel and tourism industry is the world’s largest employer, with almost 130 million jobs, or almost 7 per cent of all employees. This industry is the world’s leading industrial contributor, producing over 6 per cent of the world’s national product and accounting for capital investment in excess of $422 billion in direct, indirect and personal taxes each year. Thus, tourism has a profound impact both on the world economy and, because of the educative effect of travel and the effects on employment, on society itself.

However, the major problems of the travel and tourism industry that have hidden, or obscured, its economic impact are the diversity and fragmentation of the industry itself. The travel industry includes: hotels, motels and other types of accommodation; restaurants and other food services; transportation services and facilities; amusements, attractions and other leisure facilities; gift shops and a large number of other enterprises. Since many of these businesses also serve local residents, the impact of spending by visitors can easily be overlooked or underestimated. In addition, Meis (1992) points out that the tourism industry involves concepts that have remained amorphous to both analysts and decision makers. Moreover, in all nations this problem has made it difficult for the industry to develop any type of reliable or credible tourism information base in order to estimate the contribution it makes to regional, national and global economies. However, the nature of this very diversity makes travel and tourism ideal vehicles for economic development in a wide variety of countries, regions or communities.

Once the exclusive province of the wealthy, travel and tourism have become an institutionalised way of life for most of the population. In fact, McIntosh and Goeldner (1990) suggest that tourism has become the largest commodity in international trade for many nations and, for a significant number of other countries, it ranks second or third. For example, tourism is the major source of income in Bermuda, Greece, Italy, Spain, Switzerland and most Caribbean countries. In addition, Hawkins and Ritchie, quoting from data published by the American Express Company, suggest that the travel and tourism industry is the number one ranked employer in the Bahamas, Brazil, Canada, France, (the former) West Germany, Hong Kong, Italy, Jamaica, Japan, Singapore, the United Kingdom and the United States. However, because of problems of definition, which directly affect statistical measurement, it is not possible with any degree of certainty to provide precise, valid or reliable data about the extent of world-wide tourism participation or its economic impact. In many cases, similar difficulties arise when attempts are made to measure domestic tourism.

 

 

Here today, gone tomorrow

The Arctic and Antarctica are now within reach of the modern tourist, with many going to see these icy wildernesses before it's too late. Christian Amodeo reports on the growth of polar tourism.

Travel at the North and South Poles has become an expensive leisure activity, suitable for tourists of all ages. The poles may be inhospitable places, but they are seeing increasing numbers of visitors.

Annual figures for the Arctic, where tourism has existed since the 19th century, have increased from about a million in the early 1990s to more than 1.5 million today. This is partly because of the lengthening summer season brought about by climate change.

Most visitors arrive by ship. In 2007, 370,000 cruise passengers visited Norway, twice the number that arrived in 2000. Iceland, a country where tourism is the second-largest industry, has enjoyed an annual growth rate of nine percent since 1990. Meanwhile, Alaska received some 1,029,800 passengers, a rise of 7.3 percent from 2006. Greenland has seen the most rapid growth in marine tourism, with a sharp increase in cruise-ship arrivals of 250 percent since 2004.

The global economic downturn may have affected the annual 20.6 percent rate of increase in visitors to the Antarctic - last season saw a drop of 17 percent to 38,200 - but there has been a 760 percent rise in land-based tourism there since 1997. More people than ever are landing at fragile sites, with light aircraft, helicopters and all-terrain vehicles increasingly used for greater access, while in the past two seasons, ‘fly-sail’ operations have begun. These deliver tourists by air to ships, so far more groups can enjoy a cruise in a season; large cruise ships capable of carrying up to 800 passengers are not uncommon.

In addition, it seems that a high number of visitors return to the poles. ‘Looking at six years’ worth of data, of the people who have been to the polar regions, roughly 25 percent go for a second time,’ says Louisa Richardson, a senior marketing executive at tour operator Exodus.

In the same period that tourism has exploded, the ‘health’ of the poles has ‘deteriorated’. ‘The biggest changes taking place in the Antarctic are related to climate change,’ says Rod Downie, Environmental Manager with the British Antarctic Survey (BAS). Large numbers of visitors increase these problems.

Although polar tourism is widely accepted, there have been few regulations up until recently. At the meeting of the Antarctic Treaty in Baltimore, the 28 member nations adopted proposals for limits to tourist numbers. These included safety codes for tourist vessels in Antarctic waters, and improved environmental protection for the continent. They agreed to prevent ships with more than 500 passengers from landing in Antarctica, as well as limit the number of passengers going ashore to a maximum of 100 at any one time, with a minimum of one guide for every 20 tourists. ‘Tourism in Antarctica is not without its risks,’ says Downie. ‘After all, Antarctica doesn’t have a coastguard rescue service.’

‘So far, no surveys confirm that people are going quickly to see polar regions before they change,’ says Frigg Jorgensen, General Secretary of the Association of Arctic Expedition Cruise Operators (AECO). ‘However, Hillary Clinton and many other big names have been to Svalbard in the northernmost part of Norway to see the effects of climate change. The associated media coverage could influence others to do the same.’

These days, rarely a week passes without a negative headline in the newspapers. The suffering polar bear has become a symbol of a warming world, its plight a warning that the clock is ticking. It would seem that this ticking clock is a small but growing factor for some tourists. ‘There’s an element of “do it now”,’ acknowledges Prisca Campbell, Marketing Director of Quark Expeditions, which takes 7,000 people to the poles annually. Leaving the trip until later, it seems, may mean leaving it too late.

Nurturing talent within the family

What do we mean by being ‘talented’ or ‘gifted’? The most obvious way is to look at the work someone does and if they are capable of significant success, label them as talented. The purely quantitative route - ‘percentage definition’ - looks not at individuals, but at simple percentages, such as the top five per cent of the population, and labels them - by definition - as gifted. This definition has fallen from favour, eclipsed by the advent of IQ tests, favoured by luminaries such as Professor Hans Eysenck, where a series of written or verbal tests of general intelligence leads to a score of intelligence.

The IQ test has been eclipsed in turn. Most people studying intelligence and creativity in the new millennium now prefer a broader definition, using a multifaceted approach where talents in many areas are recognised rather than purely concentrating on academic achievement. If we are therefore assuming that talented, creative or gifted individuals may need to be assessed across a range of abilities, does this mean intelligence can run in families as a genetic or inherited tendency? Mental dysfunction - such as schizophrenia - can, so is an efficient mental capacity passed on from parent to child?

Animal experiments throw some light on this question, and on the whole area of whether it is genetics, the environment or a combination of the two that allows for intelligence and creative ability. Different strains of rats show great differences in intelligence or ‘rat reasoning’. If these are brought up in normal conditions and then run through a maze to reach a food goal, the ‘bright’ strain make far fewer wrong turns than the ‘dull’ ones. But if the environment is made dull and boring the number of errors becomes equal. Return the rats to an exciting maze and the discrepancy returns as before - but is much smaller. In other words, a dull rat in a stimulating environment will do almost as well as a bright rat who is bored in a normal one. This principle applies to humans too - someone may be born with innate intelligence, but their environment probably has the final say over whether they become creative or even a genius.

Evidence now exists that most young children, if given enough opportunities and encouragement, are able to achieve significant and sustainable levels of academic or sporting prowess. Bright or creative children are often physically very active at the same time, and so may receive more parental attention as a result - almost by default - in order to ensure their safety. They may also talk earlier, and this, in turn, breeds parental interest. This can sometimes cause problems with other siblings who may feel jealous even though they themselves may be bright. Their creative talents may be undervalued and so never come to fruition. Two themes seem to run through famously creative families as a result. The first is that the parents were able to identify the talents of each child, and nurture and encourage these accordingly but in an even-handed manner. Individual differences were encouraged, and friendly sibling rivalry was not seen as a particular problem. If the father is, say, a famous actor, there is no undue pressure for his children to follow him onto the boards, but instead their chosen interests are encouraged. There need not even be any obvious talent in such a family since there always needs to be someone who sets the family career in motion, as in the case of the Sheen acting dynasty.

Martin Sheen was the seventh of ten children born to a Spanish immigrant father and an Irish mother. Despite intense parental disapproval he turned his back on entrance exams to university and borrowed cash from a local priest to start a fledgling acting career. His acting successes in films such as Badlands and Apocalypse Now made him one of the most highly-regarded actors of the 1970s. Three sons - Emilio Estevez, Ramon Estevez and Charlie Sheen - have followed him into the profession as a consequence of being inspired by his motivation and enthusiasm.

A second theme seems to run through creative families. Such children are not necessarily smothered with love by their parents. They feel loved and wanted, and are secure in their home, but are often surrounded by an atmosphere of work, where following a calling appears to be important. They may see from their parents that it takes time and dedication to be master of a craft, and so are in less of a hurry to achieve for themselves once they start to work.

The generation of creativity is complex: it is a mixture of genetics, the environment, parental teaching and luck that determines how successful or talented family members are. This last point - luck - is often not mentioned where talent is concerned but plays an undoubted part. Mozart, considered by many to be the finest composer of all time, was lucky to be living in an age that encouraged the writing of music. He was brought up surrounded by it, his father was a musician who encouraged him to the point of giving up his job to promote his child genius, and he learnt musical composition with frightening speed - the speed of a genius. Mozart himself simply wanted to create the finest music ever written but did not necessarily view himself as a genius - he could write sublime music at will, and so often preferred to lead a hedonistic lifestyle that he found more exciting than writing music to order.

Albert Einstein and Bill Gates are two more examples of people whose talents have blossomed by virtue of the times they were living in. Einstein was a solitary, somewhat slow child who had affection at home but whose phenomenal intelligence emerged without any obvious parental input. This may have been partly due to the fact that at the start of the 20th Century a lot of the Newtonian laws of physics were being questioned, leaving a fertile ground for ideas such as his to be developed. Bill Gates may have had the creative vision to develop Microsoft, but without the new computer age dawning at the same time he may never have achieved the position on the world stage he now occupies.

 

 

Greying population stays in the pink

Elderly people are growing healthier, happier and more independent, say American scientists. The results of a 14-year study to be announced later this month reveal that the diseases associated with old age are afflicting fewer and fewer people and when they do strike, it is much later in life.

In the last 14 years, the National Long-term Health Care Survey has gathered data on the health and lifestyles of more than 20,000 men and women over 65. Researchers, now analysing the results of data gathered in 1994, say arthritis, high blood pressure and circulation problems - the major medical complaints in this age group - are troubling a smaller proportion every year. And the data confirms that the rate at which these diseases are declining continues to accelerate. Other diseases of old age - dementia, stroke, arteriosclerosis and emphysema - are also troubling fewer and fewer people.

'It really raises the question of what should be considered normal ageing,' says Kenneth Manton, a demographer from Duke University in North Carolina. He says the problems doctors accepted as normal in a 65-year-old in 1982 are often not appearing until people are 70 or 75.

Clearly, certain diseases are beating a retreat in the face of medical advances. But there may be other contributing factors. Improvements in childhood nutrition in the first quarter of the twentieth century, for example, gave today's elderly people a better start in life than their predecessors.

On the downside, the data also reveals failures in public health that have caused surges in some illnesses. An increase in some cancers and bronchitis may reflect changing smoking habits and poorer air quality, say the researchers. 'These may be subtle influences,' says Manton, 'but our subjects have been exposed to worse and worse pollution for over 60 years. It's not surprising we see some effect.'

One interesting correlation Manton uncovered is that better-educated people are likely to live longer. For example, 65-year-old women with fewer than eight years of schooling are expected, on average, to live to 82. Those who continued their education live an extra seven years. Although some of this can be attributed to a higher income, Manton believes it is mainly because educated people seek more medical attention.

The survey also assessed how independent people over 65 were, and again found a striking trend. Almost 80% of those in the 1994 survey could complete everyday activities ranging from eating and dressing unaided to complex tasks such as cooking and managing their finances. That represents a significant drop in the number of disabled old people in the population. If the trends apparent in the United States 14 years ago had continued, researchers calculate there would be an additional one million disabled elderly people in today's population. According to Manton, slowing the trend has saved the United States government's Medicare system more than $200 billion, suggesting that the greying of America's population may prove less of a financial burden than expected.

The increasing self-reliance of many elderly people is probably linked to a massive increase in the use of simple home medical aids. For instance, the use of raised toilet seats has more than doubled since the start of the study, and the use of bath seats has grown by more than 50%. These developments also bring some health benefits, according to a report from the MacArthur Foundation's research group on successful ageing. The group found that those elderly people who were able to retain a sense of independence were more likely to stay healthy in old age.

Maintaining a level of daily physical activity may help mental functioning, says Carl Cotman, a neuroscientist at the University of California at Irvine. He found that rats that exercise on a treadmill have raised levels of brain-derived neurotrophic factor coursing through their brains. Cotman believes this hormone, which keeps neurons functioning, may prevent the brains of active humans from deteriorating.

As part of the same study, Teresa Seeman, a social epidemiologist at the University of Southern California in Los Angeles, found a connection between self-esteem and stress in people over 70. In laboratory simulations of challenging activities such as driving, those who felt in control of their lives pumped out lower levels of stress hormones such as cortisol. Chronically high levels of these hormones have been linked to heart disease.

But independence can have drawbacks. Seeman found that elderly people who felt emotionally isolated maintained higher levels of stress hormones even when asleep. The research suggests that older people fare best when they feel independent but know they can get help when they need it.

'Like much research into ageing, these results support common sense,' says Seeman. They also show that we may be underestimating the impact of these simple factors. 'The sort of thing that your grandmother always told you turns out to be right on target,' she says.

Painters of time

'The world's fascination with the mystique of Australian Aboriginal art.'

Emmanuel de Roux
 

The works of Aboriginal artists are now much in demand throughout the world, and not just in Australia, where they are already fully recognised: the National Museum of Australia, which opened in Canberra in 2001, designated 40% of its exhibition space to works by Aborigines. In Europe their art is being exhibited at a museum in Lyon, France, while the future Quai Branly museum in Paris - which will be devoted to the arts and civilisations of Africa, Asia, Oceania and the Americas - plans to commission frescoes by artists from Australia.

Their artistic movement began about 30 years ago, but its roots go back to time immemorial. All the works refer to the founding myth of the Aboriginal culture, ‘the Dreaming’. That internal geography, which is rendered with a brush and colours, is also the expression of the Aborigines’ long quest to regain the land which was stolen from them when Europeans arrived in the nineteenth century. ‘Painting is nothing without history,’ says one such artist, Michael Nelson Tjakamarra.

There are now fewer than 400,000 Aborigines living in Australia. They have been swamped by the country's 17.5 million immigrants. These original ‘natives’ have been living in Australia for 50,000 years, but they were undoubtedly maltreated by the newcomers. Driven back to the most barren lands or crammed into slums on the outskirts of cities, the Aborigines were subjected to a policy of ‘assimilation’, which involved kidnapping children to make them better ‘integrated’ into European society, and herding the nomadic Aborigines by force into settled communities.

It was in one such community, Papunya, near Alice Springs, in the central desert, that Aboriginal painting first came into its own. In 1971, a white schoolteacher, Geoffrey Bardon, suggested to a group of Aborigines that they should decorate the school walls with ritual motifs, so as to pass on to the younger generation the myths that were starting to fade from their collective memory. He gave them brushes.

He was astounded by the result. But their art did not come like a bolt from the blue: for thousands of years Aborigines had been ‘painting' on the ground using sands of different colours, and on rock faces. They had also been decorating their bodies for ceremonial purposes. So there existed a formal vocabulary.

This had already been noted by Europeans. In the early twentieth century, Aboriginal communities brought together by missionaries in northern Australia had been encouraged to reproduce on tree bark the motifs found on rock faces. Artists turned out a steady stream of works, supported by the churches, which helped to sell them to the public, and between 1950 and 1960 Aboriginal paintings began to reach overseas museums. Painting on bark persisted in the north, whereas the communities in the central desert increasingly used acrylic paint, and elsewhere in Western Australia women explored the possibilities of wax painting and dyeing processes, known as ‘batik’.

What Aborigines depict are always elements of the Dreaming, the collective history that each community is both part of and guardian of. The Dreaming is the story of their origins, of their ‘Great Ancestors’, who passed on their knowledge, their art and their skills (hunting, medicine, painting, music and dance) to man. ‘The Dreaming is not synonymous with the moment when the world was created,’ says Stephane Jacob, one of the organisers of the Lyon exhibition. ‘For Aborigines, that moment has never ceased to exist. It is perpetuated by the cycle of the seasons and the religious ceremonies which the Aborigines organise. Indeed the aim of those ceremonies is also to ensure the permanence of that golden age. The central function of Aboriginal painting, even in its contemporary manifestations, is to guarantee the survival of this world. The Dreaming is both past, present and future.’

Each work is created individually, with a form peculiar to each artist, but it is created within and on behalf of a community who must approve it. An artist cannot use a 'dream' that does not belong to his or her community, since each community is the owner of its dreams, just as it is anchored to a territory marked out by its ancestors, so each painting can be interpreted as a kind of spiritual road map for that community.

Nowadays, each community is organised as a cooperative and draws on the services of an art adviser, a government-employed agent who provides the artists with materials, deals with galleries and museums and redistributes the proceeds from sales among the artists.

Today, Aboriginal painting has become a great success. Some works sell for more than $25,000, and exceptional items may fetch as much as $180,000 in Australia.

‘By exporting their paintings as though they were surfaces of their territory, by accompanying them to the temples of western art, the Aborigines have redrawn the map of their country, into whose depths they were exiled,’ says Yves Le Fur, of the Quai Branly museum. ‘Masterpieces have been created. Their undeniable power prompts a dialogue that has proved all too rare in the history of contacts between the two cultures.’

 

 

Early occupations around the river Thames

In her pioneering survey, Sources of London English, Laura Wright has listed the variety of medieval workers who took their livings from the river Thames. The baillies of Queenhithe and Billingsgate acted as customs officers. There were conservators, who were responsible for maintaining the embankments and the weirs, and there were the garthmen who worked in the fish garths (enclosures). Then there were galleymen and lightermen and shoutmen, called after the names of their boats, and there were hookers who were named after the manner in which they caught their fish. The searcher patrolled the Thames in search of illegal fish weirs, and the tideman worked on its banks and foreshores whenever the tide permitted him to do so.

All of these occupations persisted for many centuries, as did those jobs that depended upon the trade of the river. Yet, it was not easy work for any of the workers. They carried most goods upon their backs, since the rough surfaces of the quays and nearby streets were not suitable for wagons or large carts; the merchandise characteristically arrived in barrels which could be rolled from the ship along each quay. If the burden was too great to be carried by a single man, then the goods were slung on poles resting on the shoulders of two men. It was a slow and expensive method of business.

However, up to the eighteenth century, river work was seen in a generally favourable light. For Langland, writing in the fourteenth century, the labourers working on river merchandise were relatively prosperous. And the porters of the seventeenth and early eighteenth centuries were, if anything, aristocrats of labour, enjoying high status. However, in the years from the late eighteenth to the early nineteenth century, there was a marked change in attitude. This was in part because the working river was within the region of the East End of London, which in this period acquired an unenviable reputation. By now, dockside labour was considered to be the most disreputable, and certainly the least desirable form of work.

It could be said that the first industrial community in England grew up around the Thames. With the host of river workers themselves, as well as the vast assembly of ancillary trades such as tavern-keepers and laundresses, food-sellers and street-hawkers, shopkeepers and marine store dealers - there was a workforce of many thousands congregated in a relatively small area. There were more varieties of business to be observed by the riverside than in any other part of the city. As a result, with the possible exception of the area known as Seven Dials, the East End was also the most intensively inhabited region of London.

It was a world apart, with its own language and its own laws. From the sailors in the opium dens of Limehouse to the smugglers on the malarial flats of the estuary, the workers of the river were not part of any civilised society. The alien world of the river had entered them. That alienation was also expressed in the slang of the docks, which essentially amounted to backslang, or the reversal of ordinary words. This backslang also helped in the formulation of Cockney rhyming slang*, so that the vocabulary of Londoners was directly affected by the life of the Thames.

The reports in the nineteenth-century press reveal a heterogeneous world of dock labour, in which the crowds of casuals waiting for work at the dock gates at 7.45 a.m. include penniless refugees, bankrupts, old soldiers, broken-down gentlemen, discharged servants, and ex-convicts. There were some 400-500 permanent workers who earned a regular wage and who were considered to be the patricians of dockside labour. However, there were some 2,500 casual workers who were hired by the shift. The work for which they competed fiercely had become ever more unpleasant. Steam power could not be used for the cranes, for example, because of the danger of fire. So the cranes were powered by treadmills. Six to eight men entered a wooden cylinder and, laying hold of ropes, would tread the wheel round. They could lift nearly 20 tonnes to an average height of 27 feet (8.2 metres), forty times in an hour. This was part of the life of the river unknown to those who were intent upon its more picturesque aspects.

 

 

Migratory Beekeeping

Taking Wing

To eke out a full-time living from their honeybees, about half the nation’s 2,000 commercial beekeepers pull up stakes each spring, migrating north to find more flowers for their bees. Besides turning floral nectar into honey, these hardworking insects also pollinate crops for farmers - for a fee. As autumn approaches, the beekeepers pack up their hives and go south, scrambling for pollination contracts in hot spots like California’s fertile Central Valley.

Of the 2,000 commercial beekeepers in the United States, about half migrate. This pays off in two ways. Moving north in the summer and south in the winter lets bees work a longer blooming season, making more honey — and money — for their keepers. Second, beekeepers can carry their hives to farmers who need bees to pollinate their crops. Every spring a migratory beekeeper in California may move up to 160 million bees to flowering fields in Minnesota and every winter his family may haul the hives back to California, where farmers will rent the bees to pollinate almond and cherry trees.

Migratory beekeeping is nothing new. The ancient Egyptians moved clay hives, probably on rafts, down the Nile to follow the bloom and nectar flow as it moved toward Cairo. In the 1880s North American beekeepers experimented with the same idea, moving bees on barges along the Mississippi and on waterways in Florida, but their lighter, wooden hives kept falling into the water. Other keepers tried the railroad and horse-drawn wagons, but that didn’t prove practical. Not until the 1920s, when cars and trucks became affordable and roads improved, did migratory beekeeping begin to catch on.

For the Californian beekeeper, the pollination season begins in February. At this time, the beehives are in particular demand by farmers who have almond groves; they need two hives an acre. For the three-week long bloom, beekeepers can hire out their hives for $32 each. It’s a bonanza for the bees too. Most people consider almond honey too bitter to eat so the bees get to keep it for themselves.

By early March it is time to move the bees. It can take up to seven nights to pack the 4,000 or so hives that a beekeeper may own. These are not moved in the middle of the day because too many of the bees would end up homeless. But at night, the hives are stacked onto wooden pallets, back-to-back in sets of four, and lifted onto a truck. It is not necessary to wear gloves or a beekeeper’s veil because the hives are not being opened and the bees should remain relatively quiet. Just in case some are still lively, bees can be pacified with a few puffs of smoke blown into each hive’s narrow entrance.

In their new location, the beekeeper will pay the farmer to allow his bees to feed in such places as orange groves. The honey produced here is fragrant and sweet and can be sold by the beekeepers. To encourage the bees to produce as much honey as possible during this period, the beekeepers open the hives and stack extra boxes called supers on top. These temporary hive extensions contain frames of empty comb for the bees to fill with honey. In the brood chamber below, the bees will stash honey to eat later. To prevent the queen from crawling up to the top and laying eggs, a screen can be inserted between the brood chamber and the supers. Three weeks later the honey can be gathered.

Foul-smelling chemicals are often used to irritate the bees and drive them down into the hive’s bottom boxes, leaving the honey-filled supers more or less bee free. These can then be pulled off the hive. They are heavy with honey and may weigh up to 90 pounds each. The supers are taken to a warehouse. In the extracting room, the frames are lifted out and lowered into an “uncapper” where rotating blades shave away the wax that covers each cell. The uncapped frames are put in a carousel that sits on the bottom of a large stainless steel drum. The carousel is filled to capacity with 72 frames. A switch is flipped and the frames begin to whirl at 300 revolutions per minute; centrifugal force throws the honey out of the combs. Finally the honey is poured into barrels for shipment.

After this, approximately a quarter of the hives weakened by disease, mites, or an ageing or dead queen, will have to be replaced. To create new colonies, a healthy double hive, teeming with bees, can be separated into two boxes. One half will hold the queen and a young, already mated queen can be put in the other half, to make two hives from one. By the time the flowers bloom, the new queens will be laying eggs, filling each hive with young worker bees. The beekeeper’s family will then migrate with them to their summer location.

Adapted from “America's Beekeepers: Hives for Hire” by Alan Mairson, National Geographic.

Volcanoes - earth-shattering news

When Mount Pinatubo suddenly erupted on 9 June 1991, the power of volcanoes past and present again hit the headlines 

Volcanoes are the ultimate earth-moving machinery. A violent eruption can blow the top few kilometres off a mountain, scatter fine ash practically all over the globe and hurl rock fragments into the stratosphere to darken the skies a continent away.

But the classic eruption - cone-shaped mountain, big bang, mushroom cloud and surges of molten lava - is only a tiny part of a global story. Vulcanism, the name given to volcanic processes, really has shaped the world. Eruptions have rifted continents, raised mountain chains, constructed islands and shaped the topography of the earth. The entire ocean floor has a basement of volcanic basalt.

Volcanoes have not only made the continents, they are also thought to have made the world's first stable atmosphere and provided all the water for the oceans, rivers and ice-caps. There are now about 600 active volcanoes. Every year they add two or three cubic kilometres of rock to the continents. Imagine a similar number of volcanoes smoking away for the last 3,500 million years. That is enough rock to explain the continental crust.

What comes out of volcanic craters is mostly gas. More than 90% of this gas is water vapour from the deep earth: enough to explain, over 3,500 million years, the water in the oceans. The rest of the gas is nitrogen, carbon dioxide, sulphur dioxide, methane, ammonia and hydrogen. The quantity of these gases, again multiplied over 3,500 million years, is enough to explain the mass of the world's atmosphere. We are alive because volcanoes provided the soil, air and water we need.
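The arithmetic behind this claim can be checked to order of magnitude. The Python sketch below takes the eruption rate and time span from the passage; the continental area and average crustal thickness are rough reference values assumed here for illustration.

```python
# Order-of-magnitude check: could 3,500 million years of volcanism at today's
# rate supply the volume of the continental crust? Eruption rate and time span
# are from the passage; the area and thickness figures are assumed rough values.

rock_per_year_km3 = 2.5        # passage: "two or three cubic kilometres" of rock a year
years = 3.5e9                  # passage: the last 3,500 million years

erupted_volume_km3 = rock_per_year_km3 * years

continent_area_km2 = 1.5e8     # assumed: continents cover roughly 29% of Earth's surface
crust_thickness_km = 35        # assumed: average thickness of continental crust

crust_volume_km3 = continent_area_km2 * crust_thickness_km

print(f"Rock erupted over 3.5 billion years: {erupted_volume_km3:.1e} km^3")
print(f"Volume of the continental crust:     {crust_volume_km3:.1e} km^3")
# Both are a few times 10^9 km^3, so the passage's claim holds to order of magnitude.
```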

Geologists consider the earth as having a molten core, surrounded by a semi-molten mantle and a brittle, outer skin. It helps to think of a soft-boiled egg with a runny yolk, a firm but squishy white and a hard shell. If the shell is even slightly cracked during boiling, the white material bubbles out and sets like a tiny mountain chain over the crack - like an archipelago of volcanic islands such as the Hawaiian Islands. But the earth is so much bigger and the mantle below is so much hotter.

Even though the mantle rocks are kept solid by overlying pressure, they can still slowly 'flow' like thick treacle. The flow, thought to be in the form of convection currents, is powerful enough to fracture the 'eggshell' of the crust into plates, and keep them bumping and grinding against each other, or even overlapping, at the rate of a few centimetres a year. These fracture zones, where the collisions occur, are where earthquakes happen. And, very often, volcanoes.

These zones are lines of weakness, or hot spots. Every eruption is different, but put at its simplest, where there are weaknesses, rocks deep in the mantle, heated to 1,350°C, will start to expand and rise. As they do so, the pressure drops, and they expand and become liquid and rise more swiftly.

Sometimes it is slow: vast bubbles of magma - molten rock from the mantle - inch towards the surface, cooling slowly, to show through as granite extrusions (as on Skye, or the Great Whin Sill, the lava dyke squeezed out like toothpaste that carries part of Hadrian's Wall in northern England). Sometimes - as in Northern Ireland, Wales and the Karoo in South Africa - the magma rose faster, and then flowed out horizontally on to the surface in vast thick sheets. In the Deccan plateau in western India, there are more than two million cubic kilometres of lava, some of it 2,400 metres thick, formed over 500,000 years of slurping eruption.

Sometimes the magma moves very swiftly indeed. It does not have time to cool as it surges upwards. The gases trapped inside the boiling rock expand suddenly, the lava glows with heat, it begins to froth, and it explodes with tremendous force. Then the slightly cooler lava following it begins to flow over the lip of the crater. It happens on Mars, it happened on the moon, it even happens on some of the moons of Jupiter and Uranus. By studying the evidence, vulcanologists can read the force of the great blasts of the past. Is the pumice light and full of holes? The explosion was tremendous. Are the rocks heavy, with huge crystalline basalt shapes, like the Giant's Causeway in Northern Ireland? It was a slow, gentle eruption.

The biggest eruptions are deep on the mid-ocean floor, where new lava is forcing the continents apart and widening the Atlantic by perhaps five centimetres a year. Look at maps of volcanoes, earthquakes and island chains like the Philippines and Japan, and you can see the rough outlines of what are called tectonic plates - the plates which make up the earth's crust and mantle. The most dramatic of these is the Pacific 'ring of fire' where there have been the most violent explosions - Mount Pinatubo near Manila, Mount St Helens in the Cascades and El Chichon in Mexico about a decade ago, not to mention world-shaking blasts like Krakatoa in the Sunda Straits in 1883.

But volcanoes are not very predictable. That is because geological time is not like human time. During quiet periods, volcanoes cap themselves with their own lava by forming a powerful cone from the molten rocks slopping over the rim of the crater; later the lava cools slowly into a huge, hard, stable plug which blocks any further eruption until the pressure below becomes irresistible. In the case of Mount Pinatubo, this took 600 years.

Then, sometimes, with only a small warning, the mountain blows its top. It did this at Mont Pelee in Martinique at 7.49 a.m. on 8 May, 1902. Of a town of 28,000, only two people survived. In 1815, a sudden blast removed the top 1,280 metres of Mount Tambora in Indonesia. The eruption was so fierce that dust thrown into the stratosphere darkened the skies, cancelling the following summer in Europe and North America. Thousands starved as the harvests failed, after snow in June and frosts in August. Volcanoes are potentially world news, especially the quiet ones.

 

 

RISING SEA

INCREASED TEMPERATURES

The average air temperature at the surface of the earth has risen this century, as has the temperature of ocean surface waters. Because water expands as it heats, a warmer ocean means higher sea levels. We cannot say definitely that the temperature rises are due to the greenhouse effect; the heating may be part of a “natural” variability over a long time-scale that we have not yet recognized in our short 100 years of recording. However, assuming that the build-up of greenhouse gases is responsible and that the warming will continue, scientists and inhabitants of low-lying coastal areas would like to know the extent of future sea-level rises.

Calculating this is not easy. Models used for the purpose have treated the oceans as passive, stationary and one-dimensional. Scientists have assumed that heat simply diffused into the sea from the atmosphere. Using basic physical laws, they then predict how much a known volume of water would expand for a given increase in temperature. But the oceans are not one-dimensional, and recent work by oceanographers, using a new model which takes into account a number of subtle facets of the sea - including vast and complex ocean currents - suggests that the rise in sea level may be less than some earlier estimates had predicted.

An international forum on climate change, in 1986, produced figures for likely sea-level rises of 20 cm and 1.4 m, corresponding to atmospheric temperature increases of 1.5°C and 4.5°C respectively. Some scientists estimate that the ocean warming resulting from those temperature increases by the year 2050 would raise the sea level by between 10 cm and 40 cm. This model only takes into account the temperature effect on the oceans; it does not consider changes in sea level brought about by the melting of ice sheets and glaciers, and changes in groundwater storage. When we add on estimates of these, we arrive at figures for total sea-level rises of 15 cm and 70 cm respectively.
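The thermal-expansion part of such an estimate can be illustrated with the simple one-dimensional approach the passage describes, in which the extra heat is assumed to warm only an upper layer of the sea. In the Python sketch below, the 1.5°C and 4.5°C warming figures come from the passage, while the expansion coefficient and the depth of the warmed layer are illustrative assumptions rather than values from the models cited.

```python
# A minimal one-dimensional thermal-expansion estimate of sea-level rise.
# The warming scenarios come from the passage; the expansion coefficient and
# warmed-layer depth are assumed purely for illustration.

beta = 2.0e-4          # assumed thermal expansion coefficient of seawater, per degree C
warmed_depth_m = 400   # assumed depth of the layer that takes up the extra heat

for delta_t in (1.5, 4.5):                     # warming scenarios from the passage
    rise_cm = beta * warmed_depth_m * delta_t * 100
    print(f"Warming of {delta_t} C -> about {rise_cm:.0f} cm of thermal-expansion rise")

# With these assumptions the scenarios give roughly 12 cm and 36 cm,
# comparable to the 10-40 cm range quoted for ocean warming alone.
```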

It’s not easy trying to model accurately the enormous complexities of the ever-changing oceans, with their great volume, massive currents and sensitivity to the influence of land masses and the atmosphere. For example, consider how heat enters the ocean. Does it just “diffuse” from the warmer air vertically into the water, and heat only the surface layer of the sea? (Warm water is less dense than cold, so it would not spread downwards). Conventional models of sea-level rise have considered that this is the only method, but measurements have shown that the rate of heat transfer into the ocean by vertical diffusion is far lower in practice than the figures that many models have adopted.

Much of the early work, for simplicity, ignored the fact that water in the oceans moves in three dimensions. By movement, of course, scientists don’t mean waves, which are too small individually to consider, but rather movement of vast volumes of water in huge currents. To understand the importance of this, we now need to consider another process - advection. Imagine smoke rising from a chimney. On a still day it will slowly spread out in all directions by means of diffusion. With a strong directional wind, however, it will all shift downwind. This process is advection - the transport of properties (notably heat and salinity in the ocean) by the movement of bodies of air or water, rather than by conduction or diffusion.

Massive ocean currents called gyres do the moving. These currents have far more capacity to store heat than does the atmosphere. Indeed, just the top 3 m of the ocean contains more heat than the whole of the atmosphere. The origin of the gyres lies in the fact that more heat from the Sun reaches the Equator than the Poles, and naturally heat tends to move from the former to the latter. Warm air rises at the Equator, and draws more air beneath it in the form of winds (the “Trade Winds”) that, together with other air movements, provide the main force driving the ocean currents.

Water itself is heated at the Equator and moves poleward, twisted by the Earth’s rotation and affected by the positions of the continents. The resultant broadly circular movements between about 10° and 40° North and South are clockwise in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. They flow towards the west in the equatorial region and towards the east at mid-latitudes. They then flow towards the Poles, along the eastern sides of continents, as warm currents. When two different masses of water meet, one will move beneath the other, depending on their relative densities in the subduction process. The densities are determined by temperature and salinity. The convergence of water of different densities from the Equator and the Poles deep in the oceans causes continuous subduction. This means that water moves vertically as well as horizontally. Cold water from the Poles travels at depth - it is denser than warm water - until it emerges at the surface in another part of the world in the form of a cold current.

HOW THE GREENHOUSE EFFECT WILL CHANGE OCEAN TEMPERATURES

Ocean currents, in three dimensions, form a giant “conveyor belt”, distributing heat from the thin surface layer into the interior of the oceans and around the globe. Water may take decades to circulate in these 3-D gyres in the top kilometre of the ocean, and centuries in the deep water. With the increased atmospheric temperatures due to the greenhouse effect, the ocean’s conveyor belt will carry more heat into the interior. This subduction moves heat around far more effectively than simple diffusion. Because warm water expands more than cold when it is heated, scientists had presumed that the sea level would rise unevenly around the globe. It is now believed that these inequalities cannot persist, as winds will act to continuously spread out the water expansion. Of course, if global warming changes the strength and distribution of the winds, then this “evening-out” process may not occur, and the sea level could rise more in some areas than others.

 

 

The megafires of California

Drought, housing expansion, and oversupply of tinder make for bigger, hotter fires in the western United States

Wildfires are becoming an increasing menace in the western United States, with Southern California being the hardest hit area. There's a reason fire squads battling more frequent blazes in Southern California are having such difficulty containing the flames, despite better preparedness than ever and decades of experience fighting fires fanned by the ‘Santa Ana Winds’. The wildfires themselves, experts say, are generally hotter, faster, and spread more erratically than in the past.

Megafires, also called ‘siege fires’, are the increasingly frequent blazes that burn 500,000 acres or more - 10 times the size of the average forest fire of 20 years ago. Some recent wildfires are among the biggest ever in California in terms of acreage burned, according to state figures and news reports.

One explanation for the trend to more superhot fires is that the region, which usually has dry summers, has had significantly below normal precipitation in many recent years. Another reason, experts say, is related to the century-long policy of the US Forest Service to stop wildfires as quickly as possible.

The unintentional consequence has been to halt the natural eradication of underbrush, now the primary fuel for megafires.

Three other factors contribute to the trend, they add. First is climate change, marked by a 1-degree Fahrenheit rise in average yearly temperature across the western states. Second is fire seasons that on average are 78 days longer than they were 20 years ago. Third is increased construction of homes in wooded areas.

‘We are increasingly building our homes in fire-prone ecosystems,’ says Dominik Kulakowski, adjunct professor of biology at Clark University Graduate School of Geography in Worcester, Massachusetts. ‘Doing that in many of the forests of the western US is like building homes on the side of an active volcano.'

In California, where population growth has averaged more than 600,000 a year for at least a decade, more residential housing is being built. ‘What once was open space is now residential homes providing fuel to make fires burn with greater intensity,’ says Terry McHale of the California Department of Forestry firefighters' union. ‘With so much dryness, so many communities to catch fire, so many fronts to fight, it becomes an almost incredible job.'

That said, many experts give California high marks for making progress on preparedness in recent years, after some of the largest fires in state history scorched thousands of acres, burned thousands of homes, and killed numerous people. Stung in the past by criticism of bungling that allowed fires to spread when they might have been contained, personnel are meeting the peculiar challenges of neighborhood- and canyon-hopping fires better than previously, observers say.

State promises to provide more up-to-date engines, planes, and helicopters to fight fires have been fulfilled. Firefighters’ unions that in the past complained of dilapidated equipment, old fire engines, and insufficient blueprints for fire safety are now praising the state's commitment, noting that funding for firefighting has increased, despite huge cuts in many other programs. ‘We are pleased that the current state administration has been very proactive in its support of us, and [has] come through with budgetary support of the infrastructure needs we have long sought,' says Mr. McHale of the firefighters’ union.

Besides providing money to upgrade the fire engines that must traverse the mammoth state and wind along serpentine canyon roads, the state has invested in better command-and-control facilities as well as in the strategies to run them. ‘In the fire sieges of earlier years, we found that other jurisdictions and states were willing to offer mutual-aid help, but we were not able to communicate adequately with them,’ says Kim Zagaris, chief of the state's Office of Emergency Services Fire and Rescue Branch.

After a commission examined and revamped communications procedures, the statewide response ‘has become far more professional and responsive,’ he says. There is a sense among both government officials and residents that the speed, dedication, and coordination of firefighters from several states and jurisdictions are resulting in greater efficiency than in past ‘siege fire’ situations.

In recent years, the Southern California region has improved building codes, evacuation procedures, and procurement of new technology. ‘I am extraordinarily impressed by the improvements we have witnessed,’ says Randy Jacobs, a Southern California-based lawyer who has had to evacuate both his home and business to escape wildfires. ‘Notwithstanding all the damage that will continue to be caused by wildfires, we will no longer suffer the loss of life endured in the past because of the fire prevention and firefighting measures that have been put in place,’ he says.

 

 

Disappearing Delta

The fertile land of the Nile delta is being eroded along Egypt's Mediterranean coast at an astounding rate, in some parts estimated at 100 metres per year. In the past, land scoured away from the coastline by the currents of the Mediterranean Sea used to be replaced by sediment brought down to the delta by the River Nile, but this is no longer happening.

Up to now, people have blamed this loss of delta land on the two large dams at Aswan in the south of Egypt, which hold back virtually all of the sediment that used to flow down the river. Before the dams were built, the Nile flowed freely, carrying huge quantities of sediment north from Africa's interior to be deposited on the Nile delta. This continued for 7,000 years, eventually covering a region of over 22,000 square kilometres with layers of fertile silt. Annual flooding brought in new, nutrient-rich soil to the delta region, replacing what had been washed away by the sea, and dispensing with the need for fertilizers in Egypt's richest food-growing area. But when the Aswan dams were constructed in the 20th century to provide electricity and irrigation, and to protect the huge population centre of Cairo and its surrounding areas from annual flooding and drought, most of the sediment with its natural fertilizer accumulated up above the dam in the southern, upstream half of Lake Nasser, instead of passing down to the delta.

Now, however, there turns out to be more to the story. It appears that the sediment-free water emerging from the Aswan dams picks up silt and sand as it erodes the river bed and banks on the 800-kilometre trip to Cairo. Daniel Jean Stanley of the Smithsonian Institution noticed that water samples taken in Cairo, just before the river enters the delta, indicated that the river sometimes carries more than 850 grams of sediment per cubic metre of water - almost half of what it carried before the dams were built.

'I'm ashamed to say that the significance of this didn't strike me until after I had read 50 or 60 studies,' says Stanley in Marine Geology. 'There is still a lot of sediment coming into the delta, but virtually no sediment comes out into the Mediterranean to replenish the coastline. So this sediment must be trapped on the delta itself.'

Once north of Cairo, most of the Nile water is diverted into more than 10,000 kilometres of irrigation canals and only a small proportion reaches the sea directly through the rivers in the delta. The water in the irrigation canals is still or very slow-moving and thus cannot carry sediment, Stanley explains. The sediment sinks to the bottom of the canals and then is added to fields by farmers or pumped with the water into the four large freshwater lagoons that are located near the outer edges of the delta. So very little of it actually reaches the coastline to replace what is being washed away by the Mediterranean currents.

The farms on the delta plains and fishing and aquaculture in the lagoons account for much of Egypt's food supply. But by the time the sediment has come to rest in the fields and lagoons it is laden with municipal, industrial and agricultural waste from the Cairo region, which is home to more than 40 million people. 'Pollutants are building up faster and faster,' says Stanley.

Based on his investigations of sediment from the delta lagoons, Frederic Siegel of George Washington University concurs. 'In Manzalah Lagoon, for example, the increase in mercury, lead, copper and zinc coincided with the building of the High Dam at Aswan, the availability of cheap electricity, and the development of major power-based industries,' he says. Since that time the concentration of mercury has increased significantly. Lead from engines that use leaded fuels and from other industrial sources has also increased dramatically. These poisons can easily enter the food chain, affecting the productivity of fishing and farming. Another problem is that agricultural wastes include fertilizers which stimulate increases in plant growth in the lagoons and upset the ecology of the area, with serious effects on the fishing industry.

According to Siegel, international environmental organisations are beginning to pay closer attention to the region, partly because of the problems of erosion and pollution of the Nile delta, but principally because they fear the impact this situation could have on the whole Mediterranean coastal ecosystem. But there are no easy solutions. In the immediate future, Stanley believes that one solution would be to make artificial floods to flush out the delta waterways, in the same way that natural floods did before the construction of the dams. He says, however, that in the long term an alternative process such as desalination may have to be used to increase the amount of water available. 'In my view, Egypt must devise a way to have more water running through the river and the delta,' says Stanley. Easier said than done in a desert region with a rapidly growing population.

 

 

Climate Change and the Inuit

The threat posed by climate change in the Arctic and the problems faced by Canada's Inuit people

Unusual incidents are being reported across the Arctic. Inuit families going off on snowmobiles to prepare their summer hunting camps have found themselves cut off from home by a sea of mud, following early thaws. There are reports of igloos losing their insulating properties as the snow drips and refreezes, of lakes draining into the sea as permafrost melts, and sea ice breaking up earlier than usual, carrying seals beyond the reach of hunters. Climate change may still be a rather abstract idea to most of us, but in the Arctic it is already having dramatic effects - if summertime ice continues to shrink at its present rate, the Arctic Ocean could soon become virtually ice-free in summer. The knock-on effects are likely to include more warming, cloudier skies, increased precipitation and higher sea levels. Scientists are increasingly keen to find out what's going on because they consider the Arctic the 'canary in the mine' for global warming - a warning of what's in store for the rest of the world.

For the Inuit the problem is urgent. They live in precarious balance with one of the toughest environments on earth. Climate change, whatever its causes, is a direct threat to their way of life. Nobody knows the Arctic as well as the locals, which is why they are not content simply to stand back and let outside experts tell them what's happening. In Canada, where the Inuit people are jealously guarding their hard-won autonomy in the country's newest territory, Nunavut, they believe their best hope of survival in this changing environment lies in combining their ancestral knowledge with the best of modern science. This is a challenge in itself.

The Canadian Arctic is a vast, treeless polar desert that's covered with snow for most of the year. Venture into this terrain and you get some idea of the hardships facing anyone who calls this home. Farming is out of the question and nature offers meagre pickings. Humans first settled in the Arctic a mere 4,500 years ago, surviving by exploiting sea mammals and fish. The environment tested them to the limits: sometimes the colonists were successful, sometimes they failed and vanished. But around a thousand years ago, one group emerged that was uniquely well adapted to cope with the Arctic environment. These Thule people moved in from Alaska, bringing kayaks, sleds, dogs, pottery and iron tools. They are the ancestors of today's Inuit people.

Life for the descendants of the Thule people is still harsh. Nunavut is 1.9 million square kilometres of rock and ice, and a handful of islands around the North Pole. It's currently home to 2,500 people, all but a handful of them indigenous Inuit. Over the past 40 years, most have abandoned their nomadic ways and settled in the territory's 28 isolated communities, but they still rely heavily on nature to provide food and clothing. Provisions available in local shops have to be flown into Nunavut on one of the most costly air networks in the world, or brought by supply ship during the few ice-free weeks of summer. It would cost a family around £7,000 a year to replace meat they obtained themselves through hunting with imported meat. Economic opportunities are scarce, and for many people state benefits are their only income.

While the Inuit may not actually starve if hunting and trapping are curtailed by climate change, there has certainly been an impact on people's health. Obesity, heart disease and diabetes are beginning to appear in a people for whom these have never before been problems. There has been a crisis of identity as the traditional skills of hunting, trapping and preparing skins have begun to disappear. In Nunavut's 'igloo and email' society, where adults who were born in igloos have children who may never have been out on the land, there's a high incidence of depression.

With so much at stake, the Inuit are determined to play a key role in teasing out the mysteries of climate change in the Arctic. Having survived there for centuries, they believe their wealth of traditional knowledge is vital to the task. And Western scientists are starting to draw on this wisdom, increasingly referred to as 'Inuit Qaujimajatuqangit', or IQ. 'In the early days scientists ignored us when they came up here to study anything. They just figured these people don't know very much so we won't ask them,' says John Amagoalik, an Inuit leader and politician. 'But in recent years IQ has had much more credibility and weight.' In fact it is now a requirement for anyone hoping to get permission to do research that they consult the communities, who are helping to set the research agenda to reflect their most important concerns. They can turn down applications from scientists they believe will work against their interests, or research projects that will impinge too much on their daily lives and traditional activities.

Some scientists doubt the value of traditional knowledge because the occupation of the Arctic doesn't go back far enough. Others, however, point out that the first weather stations in the far north date back just 50 years. There are still huge gaps in our environmental knowledge, and despite the scientific onslaught, many predictions are no more than best guesses. IQ could help to bridge the gap and resolve the tremendous uncertainty about how much of what we're seeing is natural capriciousness and how much is the consequence of human activity.

 

 


Reducing the Effects of Climate Change

Mark Rowe reports on the increasingly ambitious geo-engineering projects being explored by scientists 

Such is our dependence on fossil fuels, and such is the volume of carbon dioxide already released into the atmosphere, that many experts agree that significant global warming is now inevitable. They believe that the best we can do is keep it at a reasonable level, and at present the only serious option for doing this is cutting back on our carbon emissions. But while a few countries are making major strides in this regard, the majority are having great difficulty even stemming the rate of increase, let alone reversing it. Consequently, an increasing number of scientists are beginning to explore the alternative of geo-engineering — a term which generally refers to the intentional large-scale manipulation of the environment. According to its proponents, geo-engineering is the equivalent of a backup generator: if Plan A - reducing our dependency on fossil fuels - fails, we require a Plan B, employing grand schemes to slow down or reverse the process of global warming.

Geo-engineering has been shown to work, at least on a small localised scale. For decades, May Day parades in Moscow have taken place under clear blue skies, aircraft having deposited dry ice, silver iodide and cement powder to disperse clouds. Many of the schemes now suggested look to do the opposite, and reduce the amount of sunlight reaching the planet. The most eye-catching idea of all is suggested by Professor Roger Angel of the University of Arizona. His scheme would employ up to 16 trillion minute spacecraft, each weighing about one gram, to form a transparent, sunlight-refracting sunshade in an orbit 1.5 million km above the Earth. This could, argues Angel, reduce the amount of light reaching the Earth by two per cent.

The majority of geo-engineering projects so far carried out — which include planting forests in deserts and depositing iron in the ocean to stimulate the growth of algae - have focused on achieving a general cooling of the Earth. But some look specifically at reversing the melting at the poles, particularly the Arctic. The reasoning is that if you replenish the ice sheets and frozen waters of the high latitudes, more light will be reflected back into space, so reducing the warming of the oceans and atmosphere.

The concept of releasing aerosol sprays into the stratosphere above the Arctic has been proposed by several scientists. This would involve using sulphur or hydrogen sulphide aerosols so that sulphur dioxide would form clouds, which would, in turn, lead to a global dimming. The idea is modelled on historic volcanic explosions, such as that of Mount Pinatubo in the Philippines in 1991, which led to a short-term cooling of global temperatures by 0.5 °C. Scientists have also scrutinised whether it's possible to preserve the ice sheets of Greenland with reinforced high-tension cables, preventing icebergs from moving into the sea. Meanwhile in the Russian Arctic, geo-engineering plans include the planting of millions of birch trees. Whereas the region's native evergreen pines shade the snow and absorb radiation, birches would shed their leaves in winter, thus enabling radiation to be reflected by the snow. Re-routing Russian rivers to increase cold water flow to ice-forming areas could also be used to slow down warming, say some climate scientists.

But will such schemes ever be implemented? Generally speaking, those who are most cautious about geo-engineering are the scientists involved in the research. Angel says that his plan is ‘no substitute for developing renewable energy: the only permanent solution'. And Dr Phil Rasch of the US-based Pacific Northwest National Laboratory is equally guarded about the role of geo-engineering: 'I think all of us agree that if we were to end geo-engineering on a given day, then the planet would return to its pre-engineered condition very rapidly, and probably within ten to twenty years. That’s certainly something to worry about.’

The US National Center for Atmospheric Research has already suggested that the proposal to inject sulphur into the atmosphere might affect rainfall patterns across the tropics and the Southern Ocean. ‘Geo-engineering plans to inject stratospheric aerosols or to seed clouds would act to cool the planet, and act to increase the extent of sea ice,’ says Rasch. ‘But all the models suggest some impact on the distribution of precipitation.’

A further risk with geo-engineering projects is that you can ‘overshoot’, says Dr Dan Hunt, from the University of Bristol’s School of Geophysical Sciences, who has studied the likely impacts of the sunshade and aerosol schemes on the climate. ‘You may bring global temperatures back to pre-industrial levels, but the risk is that the poles will still be warmer than they should be and the tropics will be cooler than before industrialisation.’ To avoid such a scenario, Hunt says, Angel’s project would have to operate at half strength; all of which reinforces his view that the best option is to avoid the need for geo-engineering altogether.

The main reason why geo-engineering is supported by many in the scientific community is that most researchers have little faith in the ability of politicians to agree - and then bring in — the necessary carbon cuts. Even leading conservation organisations see the value of investigating the potential of geo-engineering. According to Dr Martin Sommerkorn, climate change advisor for the World Wildlife Fund’s International Arctic Programme, ‘Human-induced climate change has brought humanity to a position where we shouldn’t exclude thinking thoroughly about this topic and its possibilities.’

The Concept of Role Theory


Any individual in any situation occupies a role in relation to other people. The particular individual with whom one is concerned in the analysis of any situation is usually given the name of focal person. He has the focal role and can be regarded as sitting in the middle of a group of people, with whom he interacts in some way in that situation. This group of people is called his role set. For instance, in the family situation, an individual’s role set might be shown as in Figure 6.

Role set

The role set should include all those with whom the individual has more than trivial interactions.

Role definition

The definition of any individual’s role in any situation will be a combination of the role expectations that the members of the role set have of the focal role. These expectations are often occupationally defined, sometimes even legally so. The role definitions of lawyers and doctors are fairly clearly defined both in legal and in cultural terms. The role definitions of, say, a film star or bank manager, are also fairly clearly defined in cultural terms, too clearly perhaps.

Individuals often find it hard to escape from the role that cultural traditions have defined for them. Not only with doctors or lawyers is the required role behaviour so constrained that if you are in that role for long it eventually becomes part of you, part of your personality. Hence, there is some likelihood that all accountants will be alike or that all blondes are similar - they are forced that way by the expectations of their role. 

It is often important that you make it clear what your particular role is at a given time. The means of doing this are called, rather obviously, role signs. The simplest of role signs is a uniform. The number of stripes on your arm or pips on your shoulder is a very precise role definition which allows you to do certain very prescribed things in certain situations. Imagine yourself questioning a stranger on a dark street at midnight without wearing the role signs of a policeman!

In social circumstances, dress has often been used as a role sign to indicate the nature and degree of formality of any gathering and occasionally the social status of people present. The current trend towards blurring these role signs in dress is probably democratic, but it also makes some people very insecure. Without role signs, who is to know who has what role?

Place is another role sign. Managers often behave very differently outside the office and in it, even to the same person. They use a change of location to indicate a change in role from, say, boss to friend. Indeed, if you wish to change your roles you must find some outward sign that you are doing so or you won’t be permitted to change - the subordinate will continue to hear you as his boss no matter how hard you try to be his friend. In very significant cases of role change, e.g. from a soldier in the ranks to officer, from bachelor to married man, the change of role has to have a very obvious sign, hence rituals. It is interesting to observe, for instance, some decline in the emphasis given to marriage rituals. This could be taken as an indication that there is no longer such a big change in role from single to married person, and therefore no need for a public change in sign.

In organisations, office signs and furniture are often used as role signs. These and other perquisites of status are often frowned upon, but they may serve a purpose as a kind of uniform in a democratic society; roles without signs often lead to confused or differing expectations of the role of the focal person. 

Role ambiguity

Role ambiguity results when there is some uncertainty in the minds, either of the focal person or of the members of his role set, as to precisely what his role is at any given time. One of the crucial expectations that shape the role definition is that of the individual, the focal person himself. If his occupation of the role is unclear, or if it differs from that of the others in the role set, there will be a degree of role ambiguity. Is this bad? Not necessarily, for the ability to shape one’s own role is one of the freedoms that many people desire, but the ambiguity may lead to role stress which will be discussed later on. The virtue of job descriptions is that they lessen this role ambiguity. Unfortunately, job descriptions are seldom complete role definitions, except at the lower end of the scale. At middle and higher management levels, they are often a list of formal jobs and duties that say little about the more subtle and informal expectations of the role. The result is therefore to give the individual an uncomfortable feeling that there are things left unsaid, i.e. to heighten the sense of role ambiguity.

Looking at role ambiguity from the other side, from the point of view of the members of the role set, lack of clarity in the role of the focal person can cause insecurity, lack of confidence, irritation and even anger among members of his role set. One list of the roles of a manager identified the following: executive, planner, policy maker, expert, controller of rewards and punishments, counsellor, friend, teacher. If it is not clear, through role signs of one sort or another, which role is currently the operational one, the other party may not react in the appropriate way — we may, in fact, hear quite another message if the focal person speaks to us, for example, as a teacher and we hear her as an executive. 

 

 

Why Risks Can Go Wrong

Human intuition is a bad guide to handling risk


People make terrible decisions about the future. The evidence is all around, from their investments in the stock markets to the way they run their businesses. In fact, people are consistently bad at dealing with uncertainty, underestimating some kinds of risk and overestimating others. Surely there must be a better way than using intuition?


In the 1960s a young American research psychologist, Daniel Kahneman, became interested in people's inability to make logical decisions. That launched him on a career to show just how irrationally people behave in practice. When Kahneman and his colleagues first started work, the idea of applying psychological insights to economics and business decisions was seen as rather bizarre. But in the past decade the fields of behavioural finance and behavioural economics have blossomed, and in 2002 Kahneman shared a Nobel prize in economics for his work. Today he is in demand by business organizations and international banking companies. But, he says, there are plenty of institutions that still fail to understand the roots of their poor decisions. He claims that, far from being random, these mistakes are systematic and predictable.

One common cause of problems in decision-making is over-optimism. Ask most people about the future, and they will see too much blue sky ahead, even if past experience suggests otherwise. Surveys have shown that people's forecasts of future stock market movements are far more optimistic than past long-term returns would justify. The same goes for their hopes of ever-rising prices for their homes or doing well in games of chance. Such optimism can be useful for managers or sportsmen, and sometimes turns into a self-fulfilling prophecy. But most of the time it results in wasted effort and dashed hopes. Kahneman's work points to three types of over-confidence. First, people tend to exaggerate their own skill and prowess; in polls, far fewer than half the respondents admit to having below-average skills in, say, driving. Second, they overestimate the amount of control they have over the future, forgetting about luck and chalking up success solely to skill. And third, in competitive pursuits such as dealing on shares, they forget that they have to judge their skills against those of the competition.

Another source of wrong decisions is related to the decisive effect of the initial meeting, particularly in negotiations over money. This is referred to as the 'anchor effect'. Once a figure has been mentioned, it takes a strange hold over the human mind. The asking price quoted in a house sale, for example, tends to become accepted by all parties as the 'anchor' around which negotiations take place. Much the same goes for salary negotiations or mergers and acquisitions. If nobody has much information to go on, a figure can provide comfort - even though it may lead to a terrible mistake. 

In addition, mistakes may arise due to stubbornness. No one likes to abandon a cherished belief, and the earlier a decision has been taken, the harder it is to abandon it. Drug companies must decide early to cancel a failing research project to avoid wasting money, but may find it difficult to admit they have made a mistake. In the same way, analysts may have become wedded early to a single explanation that coloured their perception. A fresh eye always helps.

People also tend to put a lot of emphasis on things they have seen and experienced themselves, which may not be the best guide to decision-making. For example, somebody may buy an overvalued share because a relative has made thousands on it, only to get his fingers burned. In finance, too much emphasis on information close at hand helps to explain the tendency by most investors to invest only within the country they live in. Even though they know that diversification is good for their portfolio, a large majority of both Americans and Europeans invest far too heavily in the shares of their home countries. They would be much better off spreading their risks more widely.

More information is helpful in making any decision but, says Kahneman, people spend proportionally too much time on small decisions and not enough on big ones. They need to adjust the balance. During the boom years, some companies put as much effort into planning their office party as into considering strategic mergers.

Finally, crying over spilled milk is not just a waste of time; it also often colours people's perceptions of the future. Some stock market investors trade far too frequently because they are chasing the returns on shares they wish they had bought earlier.

Kahneman reckons that some types of businesses are much better than others at dealing with risk. Pharmaceutical companies, which are accustomed to many failures and a few big successes in their drug-discovery programmes, are fairly rational about their risk-taking. But banks, he says, have a long way to go. They may take big risks on a few huge loans, but are extremely cautious about their much more numerous loans to small businesses, many of which may be less risky than the big ones. And the research has implications for governments too. They face a whole range of sometimes conflicting political pressures, which means they are even more likely to take irrational decisions.

 

 

Highs & Lows

Hormone levels - and hence our moods -may be affected by the weather. Gloomy weather can cause depression, but sunshine appears to raise the spirits. In Britain, for example, the dull weather of winter drastically cuts down the amount of sunlight that is experienced which strongly affects some people. They become so depressed and lacking in energy that their work and social life are affected. This condition has been given the name SAD (Seasonal Affective Disorder). Sufferers can fight back by making the most of any sunlight in winter and by spending a few hours each day under special, full-spectrum lamps. These provide more ultraviolet and blue-green light than ordinary fluorescent and tungsten lights. Some Russian scientists claim that children learn better after being exposed to ultraviolet light. In warm countries, hours of work are often arranged so that workers can take a break, or even a siesta, during the hottest part of the day. Scientists are working to discover the links between the weather and human beings' moods and performance.

It is generally believed that tempers grow shorter in hot, muggy weather. There is no doubt that 'crimes against the person' rise in the summer, when the weather is hotter and fall in the winter when the weather is colder. Research in the United States has shown a relationship between temperature and street riots. The frequency of riots rises dramatically as the weather gets warmer, hitting a peak around 27-30°C. But is this effect really due to a mood change caused by the heat? Some scientists argue that trouble starts more often in hot weather merely because there are more people in the street when the weather is good.

Psychologists have also studied how being cold affects performance. Researchers compared divers working in icy cold water at 5°C with others in water at 20°C (about swimming pool temperature). The colder water made the divers worse at simple arithmetic and other mental tasks. But significantly, their performance was impaired as soon as they were put into the cold water - before their bodies had time to cool down. This suggests that the low temperature did not slow down mental functioning directly, but the feeling of cold distracted the divers from their tasks.

Psychologists have conducted studies showing that people become less sceptical and more optimistic when the weather is sunny. However, this apparently does not just depend on the temperature. An American psychologist studied customers in a temperature-controlled restaurant. They gave bigger tips when the sun was shining and smaller tips when it wasn't, even though the temperature in the restaurant was the same. A link between weather and mood is made believable by the evidence for a connection between behaviour and the length of the daylight hours. This in turn might involve the level of a hormone called melatonin, produced in the pineal gland in the brain. The amount of melatonin falls with greater exposure to daylight. Research shows that melatonin plays an important part in the seasonal behaviour of certain animals. For example, food consumption of stags increases during the winter, reaching a peak in February/March. It falls again to a low point in May, then rises to a peak in September, before dropping to another minimum in November. These changes seem to be triggered by varying melatonin levels.

In the laboratory, hamsters put on more weight when the nights are getting shorter and their melatonin levels are falling. On the other hand, if they are given injections of melatonin, they will stop eating altogether. It seems that time cues provided by the changing lengths of day and night trigger changes in animals' behaviour - changes that are needed to cope with the cycle of the seasons. People's moods too, have been shown to react to the length of the daylight hours. Sceptics might say that longer exposure to sunshine puts people in a better mood because they associate it with the happy feelings of holidays and freedom from responsibility. However, the belief that rain and murky weather make people more unhappy is borne out by a study in Belgium, which showed that a telephone counselling service gets more telephone calls from people with suicidal feelings when it rains.

When there is a thunderstorm brewing, some people complain of the air being 'heavy' and of feeling irritable, moody and on edge. They may be reacting to the fact that the air can become slightly positively charged when large thunderclouds are generating the intense electrical fields that cause lightning flashes. The positive charge increases the levels of serotonin (a chemical involved in sending signals in the nervous system). High levels of serotonin in certain areas of the nervous system make people more active and reactive and, possibly, more aggressive. When certain winds are blowing, such as the Mistral in southern France and the Fohn in southern Germany, mood can be affected - and the number of traffic accidents rises. It may be significant that the concentration of positively charged particles is greater than normal in these winds. In the United Kingdom, 400,000 ionizers are sold every year. These small machines raise the number of negative ions in the air in a room. Many people claim they feel better in negatively charged air. 

 

 

Persistent bullying is one of the worst experiences a child can face

How can it be prevented? Peter Smith, Professor of Psychology at the University of Sheffield, directed the Sheffield Anti-Bullying Intervention Project, funded by the Department for Education

Here he reports on his findings.

Bullying can take a variety of forms, from the verbal - being taunted or called hurtful names - to the physical - being kicked or shoved - as well as indirect forms, such as being excluded from social groups. A survey I conducted with Irene Whitney found that in British primary schools up to a quarter of pupils reported experience of bullying, which in about one in ten cases was persistent. There was less bullying in secondary schools, with about one in twenty-five suffering persistent bullying, but these cases may be particularly recalcitrant.

Bullying is clearly unpleasant, and can make the child experiencing it feel unworthy and depressed. In extreme cases it can even lead to suicide, though this is thankfully rare. Victimised pupils are more likely to experience difficulties with interpersonal relationships as adults, while children who persistently bully are more likely to grow up to be physically violent, and convicted of anti-social offences.

Until recently, not much was known about the topic, and little help was available to teachers to deal with bullying. Perhaps as a consequence, schools would often deny the problem. ‘There is no bullying at this school’ has been a common refrain, almost certainly untrue. Fortunately more schools are now saying: ‘There is not much bullying here, but when it occurs we have a clear policy for dealing with it.’

Three factors are involved in this change. First is an awareness of the severity of the problem. Second, a number of resources to help tackle bullying have become available in Britain. For example, the Scottish Council for Research in Education produced a package of materials, Action Against Bullying, circulated to all schools in England and Wales as well as in Scotland in summer 1992, with a second pack, Supporting Schools Against Bullying, produced the following year. In Ireland, Guidelines on Countering Bullying Behaviour in Post-Primary Schools was published in 1993. Third, there is evidence that these materials work, and that schools can achieve something. This comes from carefully conducted ‘before and after’ evaluations of interventions in schools, monitored by a research team. In Norway, after an intervention campaign was introduced nationally, an evaluation of forty-two schools suggested that, over a two-year period, bullying was halved. The Sheffield investigation, which involved sixteen primary schools and seven secondary schools, found that most schools succeeded in reducing bullying.

Evidence suggests that a key step is to develop a policy on bullying, saying clearly what is meant by bullying, and giving explicit guidelines on what will be done if it occurs, what records will be kept, who will be informed, what sanctions will be employed. The policy should be developed through consultation, over a period of time - not just imposed from the head teacher’s office! Pupils, parents and staff should feel they have been involved in the policy, which needs to be disseminated and implemented effectively.

Other actions can be taken to back up the policy. There are ways of dealing with the topic through the curriculum, using video, drama and literature. These are useful for raising awareness, and can best be tied in to early phases of development, while the school is starting to discuss the issue of bullying. They are also useful in renewing the policy for new pupils, or revising it in the light of experience. But curriculum work alone may only have short-term effects; it should be an addition to policy work, not a substitute.

There are also ways of working with individual pupils, or in small groups. Assertiveness training for pupils who are liable to be victims is worthwhile, and certain approaches to group bullying such as 'no blame’, can be useful in changing the behaviour of bullying pupils without confronting them directly, although other sanctions may be needed for those who continue with persistent bullying.

Work in the playground is important, too. One helpful step is to train lunchtime supervisors to distinguish bullying from playful fighting, and help them break up conflicts. Another possibility is to improve the playground environment, so that pupils are less likely to be led into bullying from boredom or frustration.

With these developments, schools can expect that at least the most serious kinds of bullying can largely be prevented. The more effort put in and the wider the whole school involvement, the more substantial the results are likely to be. The reduction in bullying - and the consequent improvement in pupil happiness - is surely a worthwhile objective.

 

 

Second nature

Your personality isn't necessarily set in stone. With a little experimentation, people can reshape their temperaments and inject passion, optimism, joy and courage into their lives

Psychologists have long held that a person's character cannot undergo a transformation in any meaningful way and that the key traits of personality are determined at a very young age. However, researchers have begun looking more closely at ways we can change. Positive psychologists have identified 24 qualities we admire, such as loyalty and kindness, and are studying them to find out why they come so naturally to some people. What they're discovering is that many of these qualities amount to habitual behaviour that determines the way we respond to the world. The good news is that all this can be learned.

Some qualities are less challenging to develop than others, optimism being one of them. However, developing qualities requires mastering a range of skills which are diverse and sometimes surprising. For example, to bring more joy and passion into your life, you must be open to experiencing negative emotions. Cultivating such qualities will help you realise your full potential.

'The evidence is good that most personality traits can be altered,' says Christopher Peterson, professor of psychology at the University of Michigan, who cites himself as an example. Inherently introverted, he realised early on that as an academic, his reticence would prove disastrous in the lecture hall. So he learned to be more outgoing and to entertain his classes. 'Now my extroverted behaviour is spontaneous,' he says.

David Fajgenbaum had to make a similar transition. He was preparing for university, when he had an accident that put an end to his sports career. On campus, he quickly found that beyond ordinary counselling, the university had no services for students who were undergoing physical rehabilitation and suffering from depression like him. He therefore launched a support group to help others in similar situations. He took action despite his own pain - a typical response of an optimist.

Suzanne Segerstrom, professor of psychology at the University of Kentucky, believes that the key to increasing optimism is through cultivating optimistic behaviour, rather than positive thinking. She recommends you train yourself to pay attention to good fortune by writing down three positive things that come about each day. This will help you convince yourself that favourable outcomes actually happen all the time, making it easier to begin taking action.

You can recognise a person who is passionate about a pursuit by the way they are so strongly involved in it. Tanya Streeter's passion is freediving - the sport of plunging deep into the water without tanks or other breathing equipment. Beginning in 1998, she set nine world records and can hold her breath for six minutes. The physical stamina required for this sport is intense but the psychological demands are even more overwhelming. Streeter learned to untangle her fears from her judgment of what her body and mind could do. 'In my career as a competitive freediver, there was a limit to what I could do - but it wasn't anywhere near what I thought it was,' she says.

Finding a pursuit that excites you can improve anyone's life. The secret about consuming passions, though, according to psychologist Paul Silvia of the University of North Carolina, is that 'they require discipline, hard work and ability, which is why they are so rewarding.' Psychologist Todd Kashdan has this advice for those people taking up a new passion: 'As a newcomer, you also have to tolerate and laugh at your own ignorance. You must be willing to accept the negative feelings that come your way,' he says.

In 2004, physician-scientist Mauro Zappaterra began his PhD research at Harvard Medical School. Unfortunately, he was miserable as his research wasn't compatible with his curiosity about healing. He finally took a break and during eight months in Santa Fe, Zappaterra learned about alternative healing techniques not taught at Harvard. When he got back, he switched labs to study how cerebrospinal fluid nourishes the developing nervous system. He also vowed to look for the joy in everything, including failure, as this could help him learn about his research and himself.

One thing that can hold joy back is a person's concentration on avoiding failure rather than their looking forward to doing something well. 'Focusing on being safe might get in the way of your reaching your goals,' explains Kashdan. For example, are you hoping to get through a business lunch without embarrassing yourself, or are you thinking about how fascinating the conversation might be?

Usually, we think of courage in physical terms but ordinary life demands something else. For marketing executive Kenneth Pedeleose, it meant speaking out against something he thought was ethically wrong. The new manager was intimidating staff so Pedeleose carefully recorded each instance of bullying and eventually took the evidence to a senior director, knowing his own job security would be threatened. Eventually the manager was the one to go. According to Cynthia Pury, a psychologist at Clemson University, Pedeleose's story proves the point that courage is not motivated by fearlessness, but by moral obligation. Pury also believes that people can acquire courage. Many of her students said that faced with a risky situation, they first tried to calm themselves down, then looked for a way to mitigate the danger, just as Pedeleose did by documenting his allegations.

Over the long term, picking up a new character trait may help you move toward being the person you want to be. And in the short term, the effort itself could be surprisingly rewarding, a kind of internal adventure.

Wheel of Fortune

Emma Duncan discusses the potential effects on the entertainment industry of the digital revolution

Since moving pictures were invented a century ago, a new way of distributing entertainment to consumers has emerged about once every generation. Each such innovation has changed the industry irreversibly; each has been accompanied by a period of fear mixed with exhilaration. The arrival of digital technology, which translates music, pictures and text into the zeros and ones of computer language, marks one of those periods.

This may sound familiar, because the digital revolution, and the explosion of choice that would go with it, has been heralded for some time. In 1992, John Malone, chief executive of TCI, an American cable giant, welcomed the '500-channel universe'. Digital television was about to deliver everything except pizzas to people's living rooms. When the entertainment companies tried out the technology, it worked fine - but not at a price that people were prepared to pay.

Those 500 channels eventually arrived but via the Internet and the PC rather than through television. The digital revolution was starting to affect the entertainment business in unexpected ways. Eventually it will change every aspect of it, from the way cartoons are made to the way films are screened to the way people buy music. That much is clear. What nobody is sure of is how it will affect the economics of the business.

New technologies always contain within them both threats and opportunities. They have the potential both to make the companies in the business a great deal richer, and to sweep them away. Old companies always fear new technology. Hollywood was hostile to television, television terrified by the VCR. Go back far enough, points out Hal Varian, an economist at the University of California at Berkeley, and you find publishers complaining that 'circulating libraries' would cannibalise their sales. Yet whenever a new technology has come in, it has made more money for existing entertainment companies. The proliferation of the means of distribution results, gratifyingly, in the proliferation of dollars, pounds, pesetas and the rest to pay for it.

All the same, there is something in the old companies' fears. New technologies may not threaten their lives, but they usually change their role. Once television became widespread, film and radio stopped being the staple form of entertainment. Cable television has undermined the power of the broadcasters. And as power has shifted the movie studios, the radio companies and the television broadcasters have been swallowed up. These days, the grand old names of entertainment have more resonance than power. Paramount is part of Viacom, a cable company; Universal, part of Seagram, a drinks-and-entertainment company; MGM, once the roaring lion of Hollywood, has been reduced to a whisper because it is not part of one of the giants. And RCA, once the most important broadcasting company in the world, is now a recording label belonging to Bertelsmann, a large German entertainment company.

Part of the reason why incumbents got pushed aside was that they did not see what was coming. But they also faced a tighter regulatory environment than the present one. In America, laws preventing television broadcasters from owning programme companies were repealed earlier this decade, allowing the creation of vertically integrated businesses. Greater freedom, combined with a sense of history, prompted the smarter companies in the entertainment business to re-invent themselves. They saw what happened to those of their predecessors who were stuck with one form of distribution.

So, these days, the powers in the entertainment business are no longer movie studios, or television broadcasters, or publishers; all those businesses have become part of bigger businesses still, companies that can both create content and distribute it in a range of different ways.

Out of all this, seven huge entertainment companies have emerged - Time Warner, Walt Disney, Bertelsmann, Viacom, News Corp, Seagram and Sony. They cover pretty well every bit of the entertainment business except pornography. Three are American, one is Australian, one Canadian, one German and one Japanese. 'What you are seeing', says Christopher Dixon, managing director of media research at PaineWebber, a stockbroker, 'is the creation of a global oligopoly. It happened to the oil and automotive businesses earlier this century; now it is happening to the entertainment business.' It remains to be seen whether the latest technology will weaken those great companies, or make them stronger than ever.


Bamboo

Bamboo is a common woody plant. It grows tall and thin. It looks almost like a tree! It grows about twenty-five metres tall. It is about fifteen centimetres wide. Bamboo looks like it is made of many small pieces. Thick lines divide it into small segments. And the inside of bamboo is empty. But it is hard and very strong.

Many people think bamboo is a tree. But it is not - it is a kind of grass. It grows mainly in East and South East Asia. It also grows in Latin America, India and parts of Africa and Australia. Bamboo grows extremely fast and spreads very quickly. There are 1500 different kinds of bamboo. People all over the world use it. And people are planting more of it. Some people call bamboo ‘the crop of the future.’ They have many good reasons to plant bamboo.

There are over 1,000 uses for bamboo! People in the past used bamboo for many things. They made musical instruments and weapons with bamboo. Artists used it for paintbrushes and paper. Fishermen used it to make equipment for catching fish. Some people even made boats from bamboo!

In China and India, doctors use bamboo in traditional medicine. Bamboo is also very useful for cooking. People put food inside the empty bamboo plant. This is a good container for cooking soup, rice or tea. But people also eat bamboo as a healthy food. People eat the soft part, or shoot, of the bamboo in many ways. Most Asian countries have special foods made from bamboo shoots.

Bamboo has been used in traditional buildings. But builders also use it today! The village of Noh Bo is just one example.

There are many modern uses for bamboo. In 1879 Thomas Edison created the first light bulb. He made it with treated bamboo!

People also use bamboo to make cloth. Beauty products sometimes contain bamboo. It is even in some water filters, to clean water! People have even designed vehicles and airplanes out of bamboo. In Ghana, people even make two-wheeled bicycles from bamboo. In the Philippines, people make electricity from bamboo! Buildings, bicycles, light bulbs and even electricity: bamboo is an amazing plant.

These are just a few of the many ways people use bamboo. But bamboo is useful for a much more important reason. It is useful while it grows! Growing bamboo helps the environment in many ways. Bamboo provides oxygen, which improves air quality. It also reduces harmful carbon dioxide in the air. It does this more quickly than trees. Bamboo also provides shade and shelter from the sun.

In many places, hardwood trees are cut down for fuel or for building. This causes problems for the earth, animals, plants and air. To keep a good environment, people must replace the trees. But it takes a very long time for most trees to reach their full size. Many hardwood trees take 50 years to grow!

Bamboo is ready to use in only three years. Bamboo is the fastest growing woody plant in the world. It can grow about 60 centimetres in only one day. The bamboo plant grows to its full size in just three or four months. Some kinds of bamboo then become dry and hard. In three years, it is strong enough to harvest and use. And bamboo grows again when it is cut down. People can harvest it year after year.

Some people are sure that bamboo is ‘the crop of the future’. For example, Nicaragua has many hardwood forests. But people are cutting down three percent of the forests every year. One organization, Eco-planet Bamboo, is trying to replace these trees with bamboo.

Eco-Planet Bamboo planted a large bamboo farm. Through this farm, Eco-Planet Bamboo hopes to improve the environment. They also hope to improve life for local people. Bamboo is helping to reduce poverty in Nicaragua. 

In Nicaragua, bamboo is providing jobs. Around the world, it is improving the environment and the economy. It is easy to see why people call bamboo the ‘crop of the future.’

 

 

 

The accidental rainforest

According to ecological theory, rainforests are supposed to develop slowly over millions of years. But now ecologists are being forced to reconsider their ideas


When Peter Osbeck, a Swedish priest, stopped off at the mid-Atlantic island of Ascension in 1752 on his way home from China, he wrote of ‘a heap of ruinous rocks’ with a bare, white mountain in the middle. All it boasted was a couple of dozen species of plant, most of them ferns and some of them unique to the island.

And so it might have remained. But in 1843 British plant collector Joseph Hooker made a brief call on his return from Antarctica. Surveying the bare earth, he concluded that the island had suffered some natural calamity that had denuded it of vegetation and triggered a decline in rainfall that was turning the place into a desert. The British Navy, which by then maintained a garrison on the island, was keen to improve the place and asked Hooker's advice. He suggested an ambitious scheme for planting trees and shrubs that would revive rainfall and stimulate a wider ecological recovery. And, perhaps lacking anything else to do, the sailors set to with a will.

 

In 1845, a naval transport ship from Argentina delivered a batch of seedlings. In the following years, more than 200 species of plant arrived from South Africa; from England came 700 packets of seeds, including those of two species that especially liked the place: bamboo and prickly pear. With sailors planting several thousand trees a year, the bare white mountain was soon cloaked in green and renamed Green Mountain, and by the early twentieth century the mountain's slopes were covered with a variety of trees and shrubs from all over the world.

Modern ecologists throw up their hands in horror at what they see as Hooker's environmental anarchy. The exotic species wrecked the indigenous ecosystem, squeezing out the island's endemic plants. In fact, Hooker knew well enough what might happen. However, he saw greater benefit in improving rainfall and encouraging more prolific vegetation on the island.

But there is a much deeper issue here than the relative benefits of sparse endemic species versus luxuriant imported ones. And as botanist David Wilkinson of Liverpool John Moores University in the UK pointed out after a recent visit to the island, it goes to the heart of some of the most dearly held tenets of ecology. Conservationists' understandable concern for the fate of Ascension’s handful of unique species has, he says, blinded them to something quite astonishing: the fact that the introduced species have been a roaring success.

Today's Green Mountain, says Wilkinson, is ‘a fully functioning man-made tropical cloud forest’ that has grown from scratch from a ragbag of species collected more or less at random from all over the planet. But how could it have happened? Conventional ecological theory says that complex ecosystems such as cloud forests can emerge only through evolutionary processes in which each organism develops in concert with others to fill particular niches. Plants co-evolve with their pollinators and seed dispersers, while microbes in the soil evolve to deal with the leaf litter.

But that’s not what happened on Green Mountain. And the experience suggests that perhaps natural rainforests are constructed far more by chance than by evolution. Species, say some ecologists, don’t so much evolve to create ecosystems as make the best of what they have. ‘The Green Mountain system is a man-made system that has produced a tropical rainforest without any co-evolution between its constituent species,’ says Wilkinson.

Not everyone agrees. Alan Gray, an ecologist at the University of Edinburgh in the UK, argues that the surviving endemic species on Green Mountain, though small in number, may still form the framework of the new ecosystem. The new arrivals may just be an adornment, with little structural importance for the ecosystem.

But to Wilkinson this sounds like clutching at straws. And the idea of the instant formation of rainforests sounds increasingly plausible as research reveals that supposedly pristine tropical rainforests from the Amazon to south-east Asia may in places be little more than the overgrown gardens of past rainforest civilisations.

The most surprising thing of all is that no ecologists have thought to conduct proper research into this human-made rainforest ecosystem. A survey of the island’s flora conducted six years ago by the University of Edinburgh was concerned only with endemic species. They characterised everything else as a threat. And the Ascension authorities are currently turning Green Mountain into a national park where introduced species, at least the invasive ones, are earmarked for culling rather than conservation.

Conservationists have understandable concerns, Wilkinson says. At least four endemic species have gone extinct on Ascension since the exotics started arriving. But in their urgency to protect endemics, ecologists are missing out on the study of a great enigma.

‘As you walk through the forest, you see lots of leaves that have had chunks taken out of them by various insects. There are caterpillars and beetles around,’ says Wilkinson. ‘But where did they come from? Are they endemic or alien? If alien, did they come with the plant on which they feed or discover it on arrival?’ Such questions go to the heart of how rainforests happen.

The Green Mountain forest holds many secrets. And the irony is that the most artificial rainforest in the world could tell us more about rainforest ecology than any number of natural forests.

 

 

Plans to protect the forests of Europe

Forests are one of the main elements of our natural heritage. The decline of Europe's forests over the last decade and a half has led to an increasing awareness and understanding of the serious imbalances which threaten them. European countries are becoming increasingly concerned by major threats to European forests, threats which know no frontiers other than those of geography or climate: air pollution, soil deterioration, the increasing number of forest fires and sometimes even the mismanagement of our woodland and forest heritage. There has been a growing awareness of the need for countries to get together to co-ordinate their policies.

In December 1990, Strasbourg hosted the first Ministerial Conference on the protection of Europe's forests. The conference brought together 31 countries from both Western and Eastern Europe. The topics discussed included the co-ordinated study of the destruction of forests, as well as how to combat forest fires and the extension of European research programmes on the forest ecosystem. The preparatory work for the conference had been undertaken at two meetings of experts. Their initial task was to decide which of the many forest problems of concern to Europe involved the largest number of countries and might be the subject of joint action. Those confined to particular geographical areas, such as countries bordering the Mediterranean or the Nordic countries, therefore had to be discarded. However, this does not mean that in future they will be ignored.

As a whole, European countries see forests as performing a triple function: biological, economic and recreational. The first is to act as a 'green lung' for our planet; by means of photosynthesis, forests produce oxygen through the transformation of solar energy, thus fulfilling what for humans is the essential role of an immense, non-polluting power plant. At the same time, forests provide raw materials for human activities through their constantly renewed production of wood. Finally, they offer those condemned to spend five days a week in an urban environment an unrivalled area of freedom to unwind and take part in a range of leisure activities, such as hunting, riding and hiking. The economic importance of forests has been understood since the dawn of man - wood was the first fuel. The other aspects have been recognised only for a few centuries but they are becoming more and more important. Hence, there is a real concern throughout Europe about the damage to the forest environment which threatens these three basic roles.

The myth of the 'natural' forest has survived, yet there are effectively no remaining 'primary' forests in Europe. All European forests are artificial, having been adapted and exploited by man for thousands of years. This means that a forest policy is vital, that it must transcend national frontiers and generations of people, and that it must allow for the inevitable changes that take place in the forests, in needs, and hence in policy. The Strasbourg conference was one of the first events on such a scale to reach this conclusion. A general declaration was made that 'a central place in any ecologically coherent forest policy must be given to continuity over time and to the possible effects of unforeseen events, to ensure that the full potential of these forests is maintained'.

That general declaration was accompanied by six detailed resolutions to assist national policymaking. The first proposes the extension and systematisation of surveillance sites to monitor forest decline. Forest decline is still poorly understood but leads to the loss of a high proportion of a tree's needles or leaves. The entire continent and the majority of species are now affected: between 30% and 50% of the tree population. The condition appears to result from the cumulative effect of a number of factors, with atmospheric pollutants the principal culprits. Compounds of nitrogen and sulphur dioxide should be particularly closely watched. However, their effects are probably accentuated by climatic factors, such as drought and hard winters, or soil imbalances such as soil acidification, which damages the roots.

The second resolution concentrates on the need to preserve the genetic diversity of European forests. The aim is to reverse the decline in the number of tree species or at least to preserve the 'genetic material' of all of them.

Although forest fires do not affect all of Europe to the same extent, the amount of damage caused the experts to propose as the third resolution that the Strasbourg conference consider the establishment of a European databank on the subject. All information used in the development of national preventative policies would become generally available.

The subject of the fourth resolution discussed by the ministers was mountain forests. In Europe, it is undoubtedly the mountain ecosystem which has changed most rapidly and is most at risk. A thinly scattered permanent population and development of leisure activities, particularly skiing, have resulted in significant long-term changes to the local ecosystems. Proposed developments include a preferential research programme on mountain forests.

The fifth resolution relaunched the European research network on the physiology of trees, called Eurosilva. Eurosilva should support joint European research on tree diseases and their physiological and biochemical aspects. Each country concerned could increase the number of scholarships and other financial support for doctoral theses and research projects in this area.

Finally, the conference established the framework for a European research network on forest ecosystems. This would also involve harmonising activities in individual countries as well as identifying a number of priority research topics relating to the protection of forests.

The Strasbourg conference's main concern was to provide for the future. This was the initial motivation, one now shared by all 31 participants representing 31 European countries. Their final text commits them to on-going discussion between government representatives with responsibility for forests.

 

 

The water hyacinth

The water hyacinth grows in tropical countries. It has beautiful purple-blue flowers, but everybody hates it. Why?

Millions and millions of these plants grow in rivers and lakes. Sometimes the plants become so thick that people can walk on them. People cannot travel in boats on the water, and they cannot fish in it. The plants stop the water from moving. Then the water carries diseases. Farmers cannot use the water on their land.

Now scientists think that water hyacinths can be useful. The plants are really a free crop. No one has to take care of them. They just grow and grow and grow. What can farmers use them for?

Some fish like to eat them. Farmers can grow these fish in the lakes and rivers. Workers can collect and cut the plants with machines. Then they can make fertilizer to make their crops grow better. They can also make feed for their farm animals. Maybe it will be possible to make methane gas for energy. (We burn gas from petroleum for energy. Methane gas comes from plants.)

Then poor tropical countries will not have to buy so much expensive petroleum. Maybe in the future people will love the water hyacinth instead of hating it. 

 

 

NATURAL CHOICE Coffee and chocolate

What's the connection between your morning coffee, wintering North American birds and the cool shade of a tree? Actually, quite a lot, says Simon Birch.

When scientists from London’s Natural History Museum descended on the coffee farms of the tiny Central American republic of El Salvador, they were astonished to find such diversity of insect and plant species. During 18 months' work on 12 farms, they found a third more species of parasitic wasp than are known to exist in the whole country of Costa Rica. They described four new species and are aware of a fifth. On 24 farms they found nearly 300 species of tree when they had expected to find about 100.

El Salvador has lost much of its natural forest, with coffee farms covering nearly 10% of the country. Most of them use the ‘shade-grown’ method of production, which utilises a semi-natural forest ecosystem. Alex Munro, the museum’s botanist on the expedition, says: ‘Our findings amazed our insect specialist. There’s a very sophisticated food web present. The wasps, for instance, may depend on specific species of tree.’

It's the same the world over. Species diversity is much higher where coffee is grown in shade conditions. In addition, coffee (and chocolate) is usually grown in tropical rainforest regions that are biodiversity hotspots. ‘These habitats support up to 70% of the planet's plant and animal species, and so the production methods of cocoa and coffee can have a hugely significant impact,' explains Dr Paul Donald of the Royal Society for the Protection of Birds.

So what does ‘shade-grown’ mean, and why is it good for wildlife? Most of the world's coffee is produced by poor farmers in the developing world. Traditionally they have grown coffee (and cocoa) under the shade of selectively thinned tracts of rain forest in a genuinely sustainable form of farming. Leaf fall from the canopy provides a supply of nutrients and acts as a mulch that suppresses weeds. The insects that live in the canopy pollinate the cocoa and coffee and prey on pests. The trees also provide farmers with fruit and wood for fuel. 

‘Bird diversity in shade-grown coffee plantations rivals that found in natural forests in the same region,’ says Robert Rice from the Smithsonian Migratory Bird Center. In Ghana, West Africa - one of the world's biggest producers of cocoa - 90% of the cocoa is grown under shade, and these forest plantations are a vital habitat for wintering European migrant birds. In the same way, the coffee forests of Central and South America are a refuge for wintering North American migrants.

More recently, a combination of the collapse in the world market for coffee and cocoa and a drive to increase yields by producer countries has led to huge swathes of shade-grown coffee and cocoa being cleared to make way for a highly intensive, monoculture pattern of production known as ‘full sun’. But this system not only reduces the diversity of flora and fauna, it also requires huge amounts of pesticides and fertilisers. In Cote d’Ivoire, which produces more than half the world's cocoa, more than a third of the crop is now grown in full-sun conditions. 

The loggers have been busy in the Americas too, where nearly 70% of all Colombian coffee is now produced using full-sun production. One study carried out in Colombia and Mexico found that, compared with shade coffee, full-sun plantations have 95% fewer species of birds.

In El Salvador, Alex Munro says shade-coffee farms have a cultural as well as ecological significance and people are not happy to see them go. But the financial pressures are great, and few of these coffee farms make much money. ‘One farm we studied, a cooperative of 100 families, made just $10,000 a year - $100 per family - and that's not taking labour costs into account.’

The loss of shade-coffee forests has so alarmed a number of North American wildlife organisations that they're now harnessing consumer power to help save these threatened habitats. They are promoting a ‘certification' system that can indicate to consumers that the beans have been grown on shade plantations. Bird-friendly coffee, for instance, is marketed by the Smithsonian Migratory Bird Center. The idea is that the small extra cost is passed directly on to the coffee farmers as a financial incentive to maintain their shade-coffee farms.

Not all conservationists agree with such measures, however. Some say certification could be leading to the loss, not preservation, of natural forests. John Rappole of the Smithsonian Conservation and Research Center, for example, argues that shade-grown marketing provides ‘an incentive to convert existing areas of primary forest that are too remote or steep to be converted profitably to other forms of cultivation into shade-coffee plantations’.

Other conservationists, such as Stacey Philpott and colleagues, argue the case for shade coffee. But there are different types of shade growing. Those used by subsistence farmers are virtually identical to natural forest (and have a corresponding diversity), while systems that use coffee plants as the understorey and cacao or citrus trees as the overstorey may be no more diverse than full-sun farms. Certification procedures need to distinguish between the two, and Ms Philpott argues that as long as the process is rigorous and offers financial gains to the producers, shade growing does benefit the environment.

The polar bear

The polar bear is a very big white bear. We call it the polar bear because it lives inside the Arctic Circle near the North Pole. There are no polar bears at the South Pole.

The polar bear lives in the snow and ice. At the North Pole, there is only snow, ice, and water. There is not any land. People cannot see the polar bear in the snow very well because its coat is yellow-white. It has a very warm coat because the weather is cold north of the Arctic Circle.

This bear is three meters long, and it weighs 450 kilos (kilograms). It can stand up on its back legs because it has very wide feet. It can use its front legs like arms. The polar bear can swim very well. It can swim 120 kilometers out into the water. It catches fish and sea animals for food. It goes into the sea when it is afraid.

Some people want to kill the polar bear for its beautiful white coat. The governments of the United States and Russia say that no one can kill polar bears now. They do not want all of these beautiful animals to die.

 

 

The Rufous Hare-Wallaby

The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, ‘mala’. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced - indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout the mala's historical range - but no other traces were found.

Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild.

Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young - known as a joey - in her pouch for about 15 weeks, and she can have more than one joey at the same time.

In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects.

With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs, so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again.

Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the mala’s known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of Western Australia. First, it had been necessary to rid the island of rats and cats - a task that had taken two years of hard work.

Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters - only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected.

Today, there are many signs suggesting that the mala population on the island is continuing to do well.

 

 

Great Migrations

Animal migration, however it is defined, is far more than just the movement of animals. It can loosely be described as travel that takes place at regular intervals - often in an annual cycle - that may involve many members of a species, and is rewarded only after a long journey. It suggests inherited instinct. The biologist Hugh Dingle has identified five characteristics that apply, in varying degrees and combinations, to all migrations. They are prolonged movements that carry animals outside familiar habitats; they tend to be linear, not zigzaggy; they involve special behaviours concerning preparation (such as overfeeding) and arrival; they demand special allocations of energy. And one more: migrating animals maintain an intense attentiveness to the greater mission, which keeps them undistracted by temptations and undeterred by challenges that would turn other animals aside. 

An arctic tern, on its 20,000 km flight from the extreme south of South America to the Arctic circle, will take no notice of a nice smelly herring offered from a bird-watcher's boat along the way. While local gulls will dive voraciously for such handouts, the tern flies on. Why? The arctic tern resists distraction because it is driven at that moment by an instinctive sense of something we humans find admirable: larger purpose. In other words, it is determined to reach its destination. The bird senses that it can eat, rest and mate later. Right now it is totally focused on the journey; its undivided intent is arrival.

Reaching some gravelly coastline in the Arctic, upon which other arctic terns have converged, will serve its larger purpose as shaped by evolution: finding a place, a time, and a set of circumstances in which it can successfully hatch and rear offspring.

But migration is a complex issue, and biologists define it differently, depending in part on what sorts of animals they study. Joel Berger, of the University of Montana, who works on the American pronghorn and other large terrestrial mammals, prefers what he calls a simple, practical definition suited to his beasts: 'movements from a seasonal home area away to another home area and back again'. Generally the reason for such seasonal back-and-forth movement is to seek resources that aren't available within a single area year-round.

But daily vertical movements by zooplankton in the ocean - upward by night to seek food, downward by day to escape predators - can also be considered migration. So can the movement of aphids when, having depleted the young leaves on one food plant, their offspring then fly onward to a different host plant, with no one aphid ever returning to where it started.

Dingle is an evolutionary biologist who studies insects. His definition is more intricate than Berger's, citing those five features that distinguish migration from other forms of movement. They allow for the fact that, for example, aphids will become sensitive to blue light (from the sky) when it's time for takeoff on their big journey, and sensitive to yellow light (reflected from tender young leaves) when it's appropriate to land. Birds will fatten themselves with heavy feeding in advance of a long migrational flight. The value of his definition, Dingle argues, is that it focuses attention on what the phenomenon of wildebeest migration shares with the phenomenon of the aphids, and therefore helps guide researchers towards understanding how evolution has produced them all.

Human behaviour, however, is having a detrimental impact on animal migration.

The pronghorn, which resembles an antelope, though they are unrelated, is the fastest land mammal of the New World. One population, which spends the summer in the mountainous Grand Teton National Park of the western USA, follows a narrow route from its summer range in the mountains, across a river, and down onto the plains. Here they wait out the frozen months, feeding mainly on sagebrush blown clear of snow. These pronghorn are notable for the invariance of their migration route and the severity of its constriction at three bottlenecks. If they can't pass through each of the three during their spring migration, they can't reach their bounty of summer grazing; if they can't pass through again in autumn, escaping south onto those windblown plains, they are likely to die trying to overwinter in the deep snow. Pronghorn, dependent on distance vision and speed to keep safe from predators, traverse high, open shoulders of land, where they can see and run. At one of the bottlenecks, forested hills rise to form a V, leaving a corridor of open ground only about 150 metres wide, filled with private homes. Increasing development is leading toward a crisis for the pronghorn, threatening to choke off their passageway.

Conservation scientists, along with some biologists and land managers within the USA's National Park Service and other agencies, are now working to preserve migrational behaviours, not just species and habitats. A National Forest has recognised the path of the pronghorn, much of which passes across its land, as a protected migration corridor. But neither the Forest Service nor the Park Service can control what happens on private land at a bottleneck. And with certain other migrating species, the challenge is complicated further - by vastly greater distances traversed, more jurisdictions, more borders, more dangers along the way. We will require wisdom and resoluteness to ensure that migrating species can continue their journeying a while longer.

 

 

Zoo conservation programmes

One of London Zoo’s recent advertisements caused me some irritation, so patently did it distort reality. Headlined “Without zoos you might as well tell these animals to get stuffed”, it was bordered with illustrations of several endangered species and went on to extol the myth that without zoos like London Zoo these animals “will almost certainly disappear forever”. With the zoo world’s rather mediocre record on conservation, one might be forgiven for being slightly sceptical about such an advertisement.

Zoos were originally created as places of entertainment, and their suggested involvement with conservation didn’t seriously arise until about 30 years ago, when the Zoological Society of London held the first formal international meeting on the subject. Eight years later, a series of world conferences took place, entitled “The Breeding of Endangered Species”, and from this point onwards conservation became the zoo community’s buzzword. This commitment has now been clearly defined in The World Zoo Conservation Strategy (WZCS, September 1993), which although an important and welcome document does seem to be based on an unrealistic optimism about the nature of the zoo industry.

The WZCS estimates that there are about 10,000 zoos in the world, of which around 1,000 represent a core of quality collections capable of participating in coordinated conservation programmes. This is probably the document’s first failing, as I believe that 10,000 is a serious underestimate of the total number of places masquerading as zoological establishments. Of course it is difficult to get accurate data but, to put the issue into perspective, I have found that, in a year of working in Eastern Europe, I discover fresh zoos on almost a weekly basis.

The second flaw in the reasoning of the WZCS document is the naive faith it places in its 1,000 core zoos. One would assume that the calibre of these institutions would have been carefully examined, but it appears that the criterion for inclusion on this select list might merely be that the zoo is a member of a zoo federation or association. This might be a good starting point, working on the premise that members must meet certain standards, but again the facts don’t support the theory. The greatly respected American Association of Zoological Parks and Aquariums (AAZPA) has had extremely dubious members, and in the UK the Federation of Zoological Gardens of Great Britain and Ireland has occasionally had members that have been roundly censured in the national press. These include Robin Hill Adventure Park on the Isle of Wight, which many considered the most notorious collection of animals in the country. This establishment, which for years was protected by the Isle’s local council (which viewed it as a tourist amenity), was finally closed down following a damning report by a veterinary inspector appointed under the terms of the Zoo Licensing Act 1981. As it was always a collection of dubious repute, one is obliged to reflect upon the standards that the Zoo Federation sets when granting membership. The situation is even worse in developing countries where little money is available for redevelopment and it is hard to see a way of incorporating collections into the overall scheme of the WZCS.

Even assuming that the WZCS’s 1,000 core zoos are all of a high standard - complete with scientific staff and research facilities, trained and dedicated keepers, accommodation that permits normal or natural behaviour, and a policy of co-operating fully with one another - what might be the potential for conservation? Colin Tudge, author of Last Animals at the Zoo (Oxford University Press, 1992), argues that “if the world’s zoos worked together in co-operative breeding programmes, then even without further expansion they could save around 2,000 species of endangered land vertebrates”. This seems an extremely optimistic proposition from a man who must be aware of the failings and weaknesses of the zoo industry - the man who, when a member of the council of London Zoo, had to persuade the zoo to devote more of its activities to conservation. Moreover, where are the facts to support such optimism?

Today approximately 16 species might be said to have been “saved” by captive breeding programmes, although a number of these can hardly be looked upon as resounding successes. Beyond that, about a further 20 species are being seriously considered for zoo conservation programmes. Given that the international conference at London Zoo was held 30 years ago, this is pretty slow progress, and a long way off Tudge’s target of 2,000.

 

 

The history of the tortoise

If you go back far enough, everything lived in the sea. At various points in evolutionary history, enterprising individuals within many different animal groups moved out onto the land, sometimes even to the most parched deserts, taking their own private seawater with them in blood and cellular fluids. In addition to the reptiles, birds, mammals and insects which we see all around us, other groups that have succeeded out of water include scorpions, snails, crustaceans such as woodlice and land crabs, millipedes and centipedes, spiders and various worms. And we mustn’t forget the plants, without whose prior invasion of the land none of the other migrations could have happened.

Moving from water to land involved a major redesign of every aspect of life, including breathing and reproduction. Nevertheless, a good number of thoroughgoing land animals later turned around, abandoned their hard-earned terrestrial re-tooling, and returned to the water again. Seals have only gone part way back. They show us what the intermediates might have been like, on the way to extreme cases such as whales and dugongs. Whales (including the small whales we call dolphins) and dugongs, with their close cousins the manatees, ceased to be land creatures altogether and reverted to the full marine habits of their remote ancestors. They don’t even come ashore to breed. They do, however, still breathe air, having never developed anything equivalent to the gills of their earlier marine incarnation. Turtles went back to the sea a very long time ago and, like all vertebrate returnees to the water, they breathe air. However, they are, in one respect, less fully given back to the water than whales or dugongs, for turtles still lay their eggs on beaches.

There is evidence that all modern turtles are descended from a terrestrial ancestor which lived before most of the dinosaurs. There are two key fossils called Proganochelys quenstedti and Palaeochersis talampayensis dating from early dinosaur times, which appear to be close to the ancestry of all modern turtles and tortoises. You might wonder how we can tell whether fossil animals lived on land or in water, especially if only fragments are found. Sometimes it’s obvious. Ichthyosaurs were reptilian contemporaries of the dinosaurs, with fins and streamlined bodies. The fossils look like dolphins and they surely lived like dolphins, in the water. With turtles it is a little less obvious. One way to tell is by measuring the bones of their forelimbs.

Walter Joyce and Jacques Gauthier, at Yale University, obtained three measurements in these particular bones of 71 species of living turtles and tortoises. They used a kind of triangular graph paper to plot the three measurements against one another. All the land tortoise species formed a tight cluster of points in the upper part of the triangle; all the water turtles cluster in the lower part of the triangular graph. There was no overlap, except when they added some species that spend time both in water and on land. Sure enough, these amphibious species show up on the triangular graph approximately half way between the ‘wet cluster’ of sea turtles and the ‘dry cluster’ of land tortoises. The next step was to determine where the fossils fell. The bones of P. quenstedti and P. talampayensis leave us in no doubt. Their points on the graph are right in the thick of the dry cluster. Both these fossils were dry-land tortoises. They come from the era before our turtles returned to the water.
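The triangular graph described here is what statisticians call a ternary plot. As a minimal sketch of the idea only - not Joyce and Gauthier's actual data or code, and with species names, bone labels and numbers invented purely for illustration - three measurements per species can be normalised so they sum to one and mapped to a point inside a triangle, after which land and water species fall into separate clusters and a fossil can be placed on the same plot.

# Illustrative ternary ("triangular graph") sketch of the forelimb-bone comparison.
# All species names and measurement values below are hypothetical.

import matplotlib.pyplot as plt

# (humerus, ulna, hand) measurements per species - invented values
species = {
    "land tortoise A": (60, 40, 25),
    "land tortoise B": (65, 42, 24),
    "sea turtle A": (50, 55, 70),
    "sea turtle B": (48, 60, 75),
    "amphibious species": (55, 50, 45),
}

def ternary_xy(a, b, c):
    """Convert three measurements to a point inside a unit-sided triangle."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total  # normalise so the proportions sum to 1
    x = 0.5 * (2 * b + c)                      # standard ternary-to-Cartesian mapping
    y = (3 ** 0.5 / 2) * c
    return x, y

for name, measurements in species.items():
    x, y = ternary_xy(*measurements)
    plt.scatter(x, y)
    plt.annotate(name, (x, y))

plt.title("Forelimb-bone proportions on a triangular graph (illustrative)")
plt.gca().set_aspect("equal")
plt.show()

Each corner of such a triangle corresponds to one measurement dominating the other two, which is why species with similar limb proportions group together regardless of their overall size.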

You might think, therefore, that modern land tortoises have probably stayed on land ever since those early terrestrial times, as most mammals did after a few of them went back to the sea. But apparently not. If you draw out the family tree of all modern turtles and tortoises, nearly all the branches are aquatic. Today’s land tortoises constitute a single branch, deeply nested among branches consisting of aquatic turtles. This suggests that modern land tortoises have not stayed on land continuously since the time of P. quenstedti and P. talampayensis. Rather, their ancestors were among those who went back to the water, and they then re-emerged back onto the land in (relatively) more recent times.

Tortoises therefore represent a remarkable double return. In common with all mammals, reptiles and birds, their remote ancestors were marine fish and before that various more or less worm-like creatures stretching back, still in the sea, to the primeval bacteria. Later ancestors lived on land and stayed there for a very large number of generations. Later ancestors still evolved back into the water and became sea turtles. And finally they returned yet again to the land as tortoises, some of which now live in the driest of deserts.

 

 

Why are so few tigers man-eaters?

As you leave the Bandhavgarh National Park in central India, there is a notice which shows a huge, placid tiger. The notice says, ‘You may not have seen me, but I have seen you.’ There are more than a billion people in India and Indian tigers probably see humans every single day of their lives. Tigers can and do kill almost everything they meet in the jungle; they will even attack elephants and rhino. Surely, then, it is a little strange that attacks on humans are not more frequent.

Some people might argue that these attacks were in fact common in the past. British writers of adventure stories, such as Jim Corbett, gave the impression that village life in India in the early years of the twentieth century involved a state of constant siege by man-eating tigers. But they may have overstated the terror spread by tigers. There were also far more tigers around in those days (probably 60,000 in the subcontinent compared to just 3,000 today). So in proportion, attacks appear to have been as rare then as they are today.

It is widely assumed that the constraint is fear; but what exactly are tigers afraid of? Can they really know that we may be even better armed than they are? Surely not. Has the species programmed the experiences of all tigers with humans into its genes to be inherited as instinct? Perhaps. But I think the explanation may be more simple and, in a way, more intriguing.

Since the growth of ethology in the 1950s, we have tried to understand animal behaviour from the animal’s point of view. Until the first elegant experiments by pioneers in the field such as Konrad Lorenz, naturalists wrote about animals as if they were slightly less intelligent humans. Jim Corbett’s breathless accounts of his duels with man-eaters in truth tell us more about Jim Corbett than they do about the animals. The principle of ethology, on the other hand, requires us to attempt to think in the same way as the animal we are studying thinks, and to observe every tiny detail of its behaviour without imposing our own human significances on its actions.

I suspect that a tiger’s fear of humans lies not in some preprogrammed ancestral logic but in the way he actually perceives us visually. If you think like a tiger, a human in a car might appear just to be a part of the car, and because tigers don’t eat cars the human is safe - unless the car is menacing the tiger or its cubs, in which case a brave or enraged tiger may charge. A human on foot is a different sort of puzzle. Imagine a tiger sees a man who is 1.8m tall. A tiger is less than 1m tall but may be up to 3m long from head to tail. So when a tiger sees the man face on, it might not be unreasonable for him to assume that the man is 6m long. If he met a deer of this size, he might attack the animal by leaping on its back, but when he looks behind the man he can’t see a back. From the front the man is huge, but looked at from the side he all but disappears. This must be very disconcerting. A hunter has to be confident that it can tackle its prey, and no one is confident when they are disconcerted. This is especially true of a solitary hunter such as the tiger and may explain why lions - particularly young lionesses, who tend to encourage one another to take risks - are more dangerous than tigers.

If the theory that a tiger is disconcerted to find that a standing human is both very big and yet somehow invisible is correct, the opposite should be true of a squatting human. A squatting human is half the size and presents twice the spread of back, and more closely resembles a medium-sized deer. If tigers were simply frightened of all humans, then a squatting person would be no more attractive as a target than a standing one. This, however, appears not to be the case. Many incidents of attacks on people involve villagers squatting or bending over to cut grass for fodder or building material.

The fact that humans stand upright may therefore not just be something that distinguishes them from nearly all other species, but also a factor that helped them to survive in a dangerous and unpredictable environment.

Note:

Ethology =  the branch of zoology that studies the behaviour of animals in their natural habitats

 

 


The Dolphin

Can dolphins talk? Maybe they can’t talk with words, but they talk with sounds. They show their feelings with sounds. Dolphins travel in a group. We call a group of fish a “school.” They don’t study, but they travel together. Dolphins are mammals, not fish, but they swim together in a school.

Dolphins talk to the other dolphins in the school. They give information. They tell when they are happy or sad or afraid. They say “Welcome” when a dolphin comes back to the school. They talk when they play.

They make a few sounds above water. They make many more sounds under water. People cannot hear these sounds because the sounds are very, very high. Scientists make tapes of the sounds and study them. Sometimes people catch dolphins for a large aquarium. (An aquarium is a zoo for fish.)

People can watch the dolphins in a show. Dolphins don’t like to be away from their school in an aquarium. They are sad and lonely. There are many stories about dolphins. They help people. Sometimes they save somebody’s life. Dolphin meat is good, but people don’t like to kill them. They say that dolphins bring good luck. Many people believe this.

 

 

So you think humans are unique

There was a time when we thought humans were special in so many ways. Now we know better. We are not the only species that feels emotions, empathises with others or abides by a moral code. Neither are we the only ones with personalities, cultures and the ability to design and use tools. Yet we have steadfastly clung to the notion that one attribute, at least, makes us unique: we alone have the capacity for language. 

Alas, it turns out we are not so special in this respect either. Key to the revolutionary reassessment of our talent for communication is the way we think about language itself. Where once it was seen as a monolith, a discrete and singular entity, today scientists find it is more productive to think of language as a suite of abilities. Viewed this way, it becomes apparent that the component parts of language are not as unique as the whole. 

Take gesture, arguably the starting point for language. Until recently, it was considered uniquely human - but not any more. Mike Tomasello of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and others have compiled a list of gestures observed in monkeys, gibbons, gorillas, chimpanzees, bonobos and orang-utans, which reveals that gesticulation plays a large role in their communication. Ape gestures can involve touch, vocalising or eye movement, and individuals wait until they have another ape’s attention before making visual or auditory gestures. If their gestures go unacknowledged, they will often repeat them or touch the recipient. 

In an experiment carried out in 2006, Erica Cartmill and Richard Byrne from the University of St Andrews in the UK had a person sit on a chair with some highly desirable food, such as a banana, to one side of them and some bland food, such as celery, to the other. The orang-utans, who could see the person and the food from their enclosures, gestured at their human partners to encourage them to push the desirable food their way. If the person feigned incomprehension and offered the bland food, the animals would change their gestures - just as humans would in a similar situation. If the human seemed to understand while being somewhat confused, giving only half the preferred food, the apes would repeat and exaggerate their gestures - again in exactly the same way a human would. Such findings highlight the fact that the gestures of non-human primates are not merely innate reflexes but are learned, flexible and under voluntary control - all characteristics that are considered prerequisites for human-like communication.

As well as gesturing, pre-linguistic infants babble. At about five months, babies start to make their first speech sounds, which some researchers believe contain a random selection of all the phonemes humans can produce. But as children learn the language of their parents, they narrow their sound repertoire to fit the model to which they are exposed, producing just the sounds of their native language as well as its classic intonation patterns. Indeed, they lose their polymath talents so effectively that they are ultimately unable to produce some sounds - think about the difficulty some speakers have producing the English th.

Dolphin calves also pass through a babbling phase. Laurance Doyle from the SETI Institute in Mountain View, California, Brenda McCowan from the University of California at Davis and their colleagues analysed the complexity of baby dolphin sounds and found that it looked remarkably like that of babbling infants, in that the young dolphins had a much wider repertoire of sound than adults. This suggests that they practise the sounds of their species, much as human babies do, before they begin to put them together in the way characteristic of mature dolphins of their species.

Of course, language is more than mere sound - it also has meaning. While the traditional, cartoonish version of animal communication renders it unclear, unpredictable and involuntary, it has become apparent that various species are able to give meaning to particular sounds by connecting them with specific ideas. Dolphins use ‘signature whistles’, so called because it appears that they name themselves. Each develops a unique moniker within the first year of life and uses it whenever it meets another dolphin.

One of the clearest examples of animals making connections between specific sounds and meanings was demonstrated by Klaus Zuberbuhler and Katie Slocombe of the University of St Andrews in the UK. They noticed that chimps at Edinburgh Zoo appeared to make rudimentary references to objects by using distinct cries when they came across different kinds of food. Highly valued foods such as bread would elicit high-pitched grunts, while less appealing ones, such as an apple, got low-pitched grunts. Zuberbuhler and Slocombe showed not only that chimps could make distinctions in the way they vocalised about food, but that other chimps understood what they meant. When played recordings of grunts that were produced for a specific food, the chimps looked in the place where that food was usually found. They also searched longer if the cry had signalled a prized type of food.

Clearly animals do have greater talents for communication than we realised. Humans are still special, but it is a far more graded, qualified kind of special than it used to be.

 

 

THE KIWI

The Kiwi lives only in New Zealand. It is a very strange bird because it cannot fly. The Kiwi is the same size as a chicken. It has no wings or tail. It does not have any feathers like other birds. It has hair on its body. Each foot has four toes. Its beak is very long.

The shy, flightless kiwi likes a lot of trees around it. It sleeps during the day because the sunlight hurts its eyes. This nocturnal bird is active only at night. It can smell things with its nose. It is the only bird in the world that can smell things. The kiwi's eggs are very big.

There are only a few kiwis in New Zealand now. A recent survey shows that kiwis are disappearing rapidly from the forests where they have lived for thousands of years. People hardly see them except in a New Zealand Zoo. The kiwi is heading towards extinction. 

In the past, millions of kiwis lived in the forests. Their only predator during that time was the large New Zealand eagle. This great eagle became extinct many years ago. The rapid disappearance of the kiwis started when man came to settle here. They brought along with them dogs, cats and Australian possums. Besides man, these animals also like to kill and eat kiwis, thus reducing the numbers of this nocturnal bird. Many people who are concerned about conserving the kiwi predict that it may disappear completely in less than 10 years.

To avoid this, the government has ruled that people cannot kill kiwis, because they are on the endangered species list. New Zealanders want their kiwis to live. The kiwi is very special to the New Zealanders. It has become the country's national symbol, as one can see a picture of a kiwi on New Zealand money. Apart from that, people from New Zealand are sometimes called Kiwis.

 

 

Issues Affecting the Southern Resident Orcas

Orcas, also known as killer whales, are opportunistic feeders, which means they will take a variety of different prey species. J, K, and L pods (specific groups of orcas found in the region) are almost exclusively fish eaters. Some studies show that up to 90 percent of their diet is salmon, with Chinook salmon being far and away their favorite. During the last 50 years, hundreds of wild runs of salmon have become extinct due to habitat loss and overfishing of wild stocks. Many of the extinct salmon stocks are the winter runs of chinook and coho. Although the surviving stocks have probably been sufficient to sustain the resident pods, many of the runs that have been lost were undoubtedly traditional resources favored by the resident orcas. This may be affecting the whales’ nutrition in the winter and may require them to change their patterns of movement in order to search for food.

Other studies with tagged whales have shown that they regularly dive up to 800 feet in this area.

Researchers tend to think that during these deep dives the whales may be feeding on bottomfish. Bottomfish species in this area would include halibut, rockfish, lingcod, and greenling. Scientists estimate that today’s lingcod population in northern Puget Sound and the Strait of Georgia is only 2 percent of what it was in 1950. The average size of rockfish in the recreational catch has also declined by several inches since the 1970s, which is indicative of overfishing. In some locations, certain rockfish species have disappeared entirely. So even if bottomfish are not a major food resource for the whales, the present low numbers of available fish increase the pressure on orcas and all marine animals to find food. (For more information on bottomfish see the San Juan County Bottomfish Recovery Program.)

Toxic substances accumulate in higher concentrations as they move up the food chain. Because orcas are the top predator in the ocean and are at the top of several different food chains in the environment, they tend to be more affected by pollutants than other sea creatures. Examinations of stranded killer whales have shown some extremely high levels of lead, mercury, and polychlorinated hydrocarbons. Abandoned marine toxic waste dumps and present levels of industrial and human refuse pollution of the inland waters probably present the most serious threat to the continued existence of this orca population. Unfortunately, the total remedy to this huge problem would be broad societal changes on many fronts. But because orcas are so popular, they may be the best species to use as a focal point in bringing about the many changes that need to be made in order to protect the marine environment as a whole from further toxic poisoning.

The waters around the San Juan Islands are extremely busy due to international commercial shipping, fishing, whale watching, and pleasure boating. On a busy weekend day in the summer, it is not uncommon to see numerous boats in the vicinity of the whales as they travel through the area. The potential impacts from all this vessel traffic with regard to the whales and other marine animals in the area could be tremendous.

The surfacing and breathing space of marine birds and mammals is a critical aspect of their habitat, which the animals must consciously deal with on a moment-to-moment basis throughout their lifetimes. With all the boating activity in the vicinity, there are three ways in which surface impacts are most likely to affect marine animals: (a) collision, (b) collision avoidance, and (c) exhaust emissions in breathing pockets.

The first two impacts are very obvious and don’t just apply to vessels with motors. Kayakers even present a problem here because they’re so quiet. Marine animals, busy hunting and feeding under the surface of the water, may not be aware that there is a kayak above them and actually hit the bottom of it as they surface to breathe.

The third impact is one most people don’t even think of. When there are numerous boats in the area, especially idling boats, there are a lot of exhaust fumes being spewed out on the surface of the water. When the whale comes up to take a nice big breath of “fresh” air, it instead gets a nice big breath of exhaust fumes. It’s hard to say how greatly this affects the animals, but think how breathing polluted air affects us (i.e., smog in large cities like Los Angeles, breathing the foul air while sitting in traffic jams, etc.).

Similar to surface impacts, a primary source of acoustic pollution for this population of orcas would also be derived from the cumulative underwater noise of vessel traffic. For cetaceans, the underwater sound environment is perhaps the most critical component of their sensory and behavioral lives. Orcas communicate with each other over short and long distances with a variety of clicks, chirps, squeaks, and whistles, along with using echolocation to locate prey and to navigate. They may also rely on passive listening as a primary sensory source. The long-term impacts from noise pollution would not likely show up as noticeable behavioral changes in habitat use, but rather as sensory damage or gradual reduction in population health. A new study at The Whale Museum called the SeaSound Remote Sensing Network has begun studying underwater acoustics and its relationship to orca communication.

 

 

Biological control of pests

The continuous and reckless use of synthetic chemicals for the control of pests which pose a threat to agricultural crops and human health is proving to be counter-productive. Apart from engendering widespread ecological disorders, pesticides have contributed to the emergence of a new breed of chemical-resistant, highly lethal superbugs. 

According to a recent study by the Food and Agriculture Organisation (FAO), more than 300 species of agricultural pests have developed resistance to a wide range of potent chemicals. Not to be left behind are the disease-spreading pests, about 100 species of which have become immune to a variety of insecticides now in use.

One glaring disadvantage of pesticides’ application is that, while destroying harmful pests, they also wipe out many useful non-targeted organisms, which keep the growth of the pest population in check. This results in what agroecologists call the ‘treadmill syndrome’. Because of their tremendous breeding potential and genetic diversity, many pests are known to withstand synthetic chemicals and bear offspring with a built-in resistance to pesticides.

The havoc that the ‘treadmill syndrome’ can bring about is well illustrated by what happened to cotton farmers in Central America. In the early 1940s, basking in the glory of chemical-based intensive agriculture, the farmers avidly took to pesticides as a sure measure to boost crop yield. The insecticide was applied eight times a year in the mid-1940s, rising to 28 in a season in the mid-1950s, following the sudden proliferation of three new varieties of chemical-resistant pests.

By the mid-1960s, the situation took an alarming turn with the outbreak of four more new pests, necessitating pesticide spraying to such an extent that 50% of the financial outlay on cotton production was accounted for by pesticides. In the early 1970s, the spraying frequently reached 70 times a season as the farmers were pushed to the wall by the invasion of genetically stronger insect species.

Most of the pesticides in the market today remain inadequately tested for properties that cause cancer and mutations as well as for other adverse effects on health, says a study by United States environmental agencies. The United States National Resource Defense Council has found that DDT was the most popular of a long list of dangerous chemicals in use.

In the face of the escalating perils from indiscriminate applications of pesticides, a more effective and ecologically sound strategy of biological control, involving the selective use of natural enemies of the pest population, is fast gaining popularity - though, as yet, it is a new field with limited potential. The advantage of biological control in contrast to other methods is that it provides a relatively low-cost, perpetual control system with a minimum of detrimental side-effects. When handled by experts, bio-control is safe, non-polluting and self-dispersing.

The Commonwealth Institute of Biological Control (CIBC) in Bangalore, with its global network of research laboratories and field stations, is one of the most active, non-commercial research agencies engaged in pest control by setting natural predators against parasites. CIBC also serves as a clearing-house for the export and import of biological agents for pest control world-wide.

CIBC successfully used a seed-feeding weevil, native to Mexico, to control the obnoxious parthenium weed, known to exert devious influence on agriculture and human health in both India and Australia. Similarly the Hyderabad-based Regional Research Laboratory (RRL), supported by CIBC, is now trying out an Argentinian weevil for the eradication of water hyacinth, another dangerous weed, which has become a nuisance in many parts of the world. According to Mrs Kaiser Jamil of RRL, ‘The Argentinian weevil does not attack any other plant and a pair of adult bugs could destroy the weed in 4-5 days.’ CIBC is also perfecting the technique for breeding parasites that prey on ‘disapene scale’ insects - notorious defoliants of fruit trees in the US and India.

How effectively biological control can be pressed into service is proved by the following examples. In the late 1960s, when Sri Lanka’s flourishing coconut groves were plagued by leaf-mining hispides, a larval parasite imported from Singapore brought the pest under control. A natural predator indigenous to India, Neodumetia sangawani, was found useful in controlling the Rhodes grass-scale insect that was devouring forage grass in many parts of the US. By using Neochetina bruci, a beetle native to Brazil, scientists at Kerala Agricultural University freed a 12-kilometre-long canal from the clutches of the weed Salvinia molesta, popularly called ‘African Payal’ in Kerala. About 30,000 hectares of rice fields in Kerala are infested by this weed.

 

 

Let's Go Bats

Bats have a problem: how to find their way around in the dark. They hunt at night, and cannot use light to help them find prey and avoid obstacles. You might say that this is a problem of their own making, one that they could avoid simply by changing their habits and hunting by day. But the daytime economy is already heavily exploited by other creatures such as birds. Given that there is a living to be made at night, and given that alternative daytime trades are thoroughly occupied, natural selection has favoured bats that make a go of the night-hunting trade. It is probable that the nocturnal trades go way back in the ancestry of all mammals. In the time when the dinosaurs dominated the daytime economy, our mammalian ancestors probably only managed to survive at all because they found ways of scraping a living at night. Only after the mysterious mass extinction of the dinosaurs about 65 million years ago were our ancestors able to emerge into the daylight in any substantial numbers.

Bats have an engineering problem: how to find their way and find their prey in the absence of light. Bats are not the only creatures to face this difficulty today. Obviously the night-flying insects that they prey on must find their way about somehow. Deep-sea fish and whales have little or no light by day or by night. Fish and dolphins that live in extremely muddy water cannot see because, although there is light, it is obstructed and scattered by the dirt in the water. Plenty of other modern animals make their living in conditions where seeing is difficult or impossible.

Given the questions of how to manoeuvre in the dark, what solutions might an engineer consider? The first one that might occur to him is to manufacture light, to use a lantern or a searchlight. Fireflies and some fish (usually with the help of bacteria) have the power to manufacture their own light, but the process seems to consume a large amount of energy. Fireflies use their light for attracting mates. This doesn't require a prohibitive amount of energy: a male’s tiny pinprick of light can be seen by a female from some distance on a dark night, since her eyes are exposed directly to the light source itself. However using light to find one's own way around requires vastly more energy, since the eyes have to detect the tiny fraction of the light that bounces off each part of the scene. The light source must therefore be immensely brighter if it is to be used as a headlight to illuminate the path, than if it is to be used as a signal to others. In any event, whether or not the reason is the energy expense, it seems to be the case that, with the possible exception of some weird deep-sea fish, no animal apart from man uses manufactured light to find its way about.

What else might the engineer think of? Well, blind humans sometimes seem to have an uncanny sense of obstacles in their path. It has been given the name ‘facial vision’, because blind people have reported that it feels a bit like the sense of touch, on the face. One report tells of a totally blind boy who could ride his tricycle at good speed round the block near his home, using facial vision. Experiments showed that, in fact, facial vision is nothing to do with touch or the front of the face, although the sensation may be referred to the front of the face, like the referred pain in a phantom limb. The sensation of facial vision, it turns out, really goes in through the ears.

Blind people, without even being aware of the fact, are actually using echoes of their own footsteps and of other sounds, to sense the presence of obstacles. Before this was discovered, engineers had already built instruments to exploit the principle, for example to measure the depth of the sea under a ship. After this technique had been invented, it was only a matter of time before weapons designers adapted it for the detection of submarines. Both sides in the Second World War relied heavily on these devices, under such codenames as Asdic (British) and Sonar (American), as well as Radar (American) or RDF (British), which uses radio echoes rather than sound echoes.

The Sonar and Radar pioneers didn't know it then, but all the world now knows that bats, or rather natural selection working on bats, had perfected the system tens of millions of years earlier; and their 'radar' achieves feats of detection and navigation that would strike an engineer dumb with admiration. It is technically incorrect to talk about bat 'radar', since they do not use radio waves. It is sonar. But the underlying mathematical theories of radar and sonar are very similar; and much of our scientific understanding of the details of what bats are doing has come from applying radar theory to them. The American zoologist Donald Griffin, who was largely responsible for the discovery of sonar in bats, coined the term 'echolocation' to cover both sonar and radar, whether used by animals or by human instruments.

 

 

Humpback whale breaks migration record

A whale surprises researchers with her journey. A lone humpback whale travelled more than 9,800 kilometres from breeding areas in Brazil to those in Madagascar, setting a record for the longest mammal migration ever documented.

Humpback whales (Megaptera novaeangliae) are known to have some of the longest migration distances of all mammals, and this huge journey is about 400 kilometres farther than the previous humpback record. The finding was made by Peter Stevick, a biologist at the College of the Atlantic in Bar Harbor, Maine.

The whale’s journey was unusual not only for its length, but also because it travelled across almost 90 degrees of longitude from west to east. Typically, humpbacks move in a north-south direction between cold feeding areas and warm breeding grounds - and the longest journeys which have been recorded until now have been between breeding and feeding sites.

The whale, a female, was first spotted off the coast of Brazil, where researchers photographed its tail fluke and took skin samples for chromosome testing to determine the animal's sex. Two years later, a tourist on a whale-watching boat snapped a photo of the humpback near Madagascar.

To match the two sightings, Stevick’s team used an extensive international catalogue of photographs of the undersides of tail flukes, which have distinctive markings. Researchers routinely compare the markings in each new photograph to those in the archive.

The scientists then estimated the animal’s shortest possible route: an arc skirting the southern tip of South Africa and heading north-east towards Madagascar. The minimum distance is 9,800 kilometres, says Stevick, but this is likely to be an underestimate, because the whale probably took a detour to feed on krill in the Southern Ocean near Antarctica before reaching its destination.
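
The ‘shortest possible route’ here is essentially a chain of great-circle legs. A minimal sketch of that kind of estimate is given below using the standard haversine formula; the waypoints (off Brazil, off the southern tip of South Africa, off Madagascar) are rough illustrative coordinates, not the positions used by Stevick’s team.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
        """Great-circle distance between two (lat, lon) points, in km."""
        phi1, phi2 = radians(lat1), radians(lat2)
        dphi = radians(lat2 - lat1)
        dlam = radians(lon2 - lon1)
        a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
        return 2 * earth_radius_km * asin(sqrt(a))

    # Rough, illustrative waypoints only (not the study's actual positions):
    brazil = (-20.0, -40.0)      # breeding area off the Brazilian coast
    cape = (-35.0, 20.0)         # skirting the southern tip of South Africa
    madagascar = (-22.0, 44.0)   # breeding area off Madagascar

    route = [brazil, cape, madagascar]
    total_km = sum(haversine_km(*a, *b) for a, b in zip(route, route[1:]))
    print(f"approximate minimum route: {total_km:.0f} km")
    # Roughly 9,000 km with these crude waypoints; the published minimum
    # estimate for the whale's route is 9,800 km.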

Most humpback-whale researchers focus their efforts on the Northern Hemisphere because the Southern Ocean near the Antarctic is a hostile environment and it is hard to get to, explains Rochelle Constantine, who studies the ecology of humpback whales at the University of Auckland in New Zealand. But, for whales, oceans in the Southern Hemisphere are wider and easier to travel across, says Constantine. Scientists will probably observe more long-distance migrations in the Southern Hemisphere as satellite tracking becomes increasingly common, she adds.

Daniel Palacios, an oceanographer at the University of Hawaii at Manoa, says that the record-breaking journey could indicate that migration patterns are shifting as populations begin to recover from near-extinction and increase in size. But the reasons why the whale did not follow the usual migration routes remain a mystery. She could have been exploring new habitats, or simply have lost her way. ‘We generally think of humpback whales as very well studied, but then they surprise us with things like this,’ Palacios says. ‘Undoubtedly there are a lot of things we still don’t know about whale migration.’

 

 


Collecting Ant Specimens

Collecting ants can be as simple as picking up stray ones and placing them in a glass jar, or as complicated as completing an exhaustive survey of all species present in an area and estimating their relative abundances. The exact method used will depend on the final purpose of the collections. For taxonomy, or classification, long series, from a single nest, which contain all castes (workers, including majors and minors, and, if present, queens and males) are desirable, to allow the determination of variation within species. For ecological studies, the most important factor is collecting identifiable samples of as many of the different species present as possible. Unfortunately, these methods are not always compatible. The taxonomist sometimes overlooks whole species in favour of those groups currently under study, while the ecologist often collects only a limited number of specimens of each species, thus reducing their value for taxonomic investigations. 

To collect as wide a range of species as possible, several methods must be used. These include hand collecting, using baits to attract the ants, ground litter sampling, and the use of pitfall traps. Hand collecting consists of searching for ants everywhere they are likely to occur. This includes on the ground, under rocks, logs or other objects on the ground, in rotten wood on the ground or on trees, in vegetation, on tree trunks and under bark. When possible, collections should be made from nests or foraging columns and at least 20 to 25 individuals collected. This will ensure that all individuals are of the same species, and so increase their value for detailed studies. Since some species are largely nocturnal, collecting should not be confined to daytime. Specimens are collected using an aspirator (often called a pooter), forceps, a fine, moistened paint brush, or fingers, if the ants are known not to sting. Individual insects are placed in plastic or glass tubes (1.5-3.0 ml capacity for small ants, 5-8 ml for larger ants) containing 75% to 95% ethanol. Plastic tubes with secure tops are better than glass because they are lighter, and do not break as easily if mishandled.

Baits can be used to attract and concentrate foragers. This often increases the number of individuals collected and attracts species that are otherwise elusive. Sugars and meats or oils will attract different species and a range should be utilised. These baits can be placed either on the ground or on the trunks of trees or large shrubs. When placed on the ground, baits should be situated on small paper cards or other flat, light-coloured surfaces, or in test-tubes or vials. This makes it easier to spot ants and to capture them before they can escape into the surrounding leaf litter.

Many ants are small and forage primarily in the layer of leaves and other debris on the ground. Collecting these species by hand can be difficult. One of the most successful ways to collect them is to gather the leaf litter in which they are foraging and extract the ants from it. This is most commonly done by placing leaf litter on a screen over a large funnel, often under some heat. As the leaf litter dries from above, ants (and other animals) move downward and eventually fall out the bottom and are collected in alcohol placed below the funnel. This method works especially well in rain forests and marshy areas. A method of improving the catch when using a funnel is to sift the leaf litter through a coarse screen before placing it above the funnel. This will concentrate the litter and remove larger leaves and twigs. It will also allow more litter to be sampled when using a limited number of funnels.

The pitfall trap is another commonly used tool for collecting ants. A pitfall trap can be any small container placed in the ground with the top level with the surrounding surface and filled with a preservative. Ants are collected when they fall into the trap while foraging.

The diameter of the traps can vary from about 18 mm to 10 cm and the number used can vary from a few to several hundred. The size of the traps used is influenced largely by personal preference (although larger sizes are generally better), while the number will be determined by the study being undertaken. The preservative used is usually ethylene glycol or propylene glycol, as alcohol will evaporate quickly and the traps will dry out.

One advantage of pitfall traps is that they can be used to collect over a period of time with minimal maintenance and intervention. One disadvantage is that some species are not collected as they either avoid the traps or do not commonly encounter them while foraging.

 

 

The Future of Fish

The face of the ocean has changed completely since the first commercial fishers cast their nets and hooks over a thousand years ago. Fisheries intensified over the centuries, but even by the nineteenth century it was still felt, justifiably, that the plentiful resources of the sea were for the most part beyond the reach of fishing, and so there was little need to restrict fishing or create protected areas. The twentieth century heralded an escalation in fishing intensity that is unprecedented in the history of the oceans, and modern fishing technologies leave fish no place to hide. Today, the only refuges from fishing are those we deliberately create. Unhappily, the sea trails far behind the land in terms of the area and the quality of protection given.

For centuries, as fishing and commerce have expanded, we have held onto the notion that the sea is different from the land. We still view it as a place where people and nations should be free to come and go at will, as well as somewhere that should be free for us to exploit. Perhaps this is why we have been so reluctant to protect the sea. On land, protected areas have proliferated as human populations have grown. Here, compared to the sea, we have made greater headway in our struggle to maintain the richness and variety of wildlife and landscape. Twelve percent of the world’s land is now contained in protected areas, whereas the corresponding figure for the sea is but three-fifths of one percent. Worse still, most marine protected areas allow some fishing to continue. Areas off-limits to all exploitation cover something like one five-thousandth of the total area of the world’s seas.
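
The gap described in this paragraph is easy to quantify from its own figures; a minimal sketch of the arithmetic:

    # Protection figures quoted in the passage, expressed as percentages.
    land_protected_pct = 12.0            # of the world's land area
    sea_protected_pct = 3 / 5 * 1.0      # "three-fifths of one percent" = 0.6
    sea_no_take_pct = 1 / 5000 * 100     # "one five-thousandth" of the sea = 0.02

    print(f"land in protected areas: {land_protected_pct:.2f}%")
    print(f"sea in protected areas:  {sea_protected_pct:.2f}%")
    print(f"sea closed to all exploitation: {sea_no_take_pct:.2f}%")
    print(f"land is protected at {land_protected_pct / sea_protected_pct:.0f} "
          f"times the rate of the sea")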

Today, we are belatedly coming to realise that ‘natural refuges’ from fishing have played a critical role in sustaining fisheries, and maintaining healthy and diverse marine ecosystems. This does not mean that marine reserves can rebuild fisheries on their own - other management measures are also required for that. However, places that are off-limits to fishing constitute the last and most important part of our package of reform for fisheries management. They underpin and enhance all our other efforts. There are limits to protection though.

Reserves cannot bring back what has died out. We can never resurrect globally extinct species, and restoring locally extinct animals may require reintroductions from elsewhere, if natural dispersal from remaining populations is insufficient. We are also seeing, in cases such as northern cod in Canada, that fishing can shift marine ecosystems into different states, where different mixes of species prevail. In many cases, these species are less desirable, since the prime fishing targets have gone or are much reduced in numbers, and changes may be difficult to reverse, even with a complete moratorium on fishing. The Mediterranean sailed by Ulysses, the legendary king of ancient Greece, supported abundant monk seals, loggerhead turtles and porpoises. Their disappearance through hunting and overfishing has totally restructured food webs, and recovery is likely to be much harder to achieve than their destruction was. This means that the sooner we act to protect marine life, the more certain will be our success.

To some people, creating marine reserves is an admission of failure. According to their logic, reserves should not be necessary if we have done our work properly in managing the uses we make of the sea. Many fisheries managers are still wedded to the idea that one day their models will work, and politicians will listen to their advice.


Neanderthals and modern humans

The evolutionary processes that have made modern humans so different from other animals are hard to determine without an ability to examine human species that have not achieved similar things. However, in a scientific masterpiece, Svante Paabo and his colleagues from the Max Planck Institute for Evolutionary Anthropology, in Leipzig, have made such a comparison possible. In 2009, at a meeting of the American Association for the Advancement of Science, they made public an analysis of the genome [1] of Neanderthal man.

Homo neanderthalensis, to give its proper name, lived in Europe and parts of Asia from 400,000 years ago to 30,000 years ago. Towards the end of this period it shared its range with interlopers in the form of Homo sapiens [2], who were spreading out from Africa. However, the two species did not settle down to a stable cohabitation. For reasons which are as yet unknown, the arrival of Homo sapiens in a region was always quickly followed by the disappearance of Neanderthals.

Before 2009, Dr Paabo and his team had conducted only a superficial comparison between the DNA of Neanderthals and modern humans. Since then, they have performed a more thorough study and, in doing so, have shed a fascinating light on the intertwined history of the two species. That history turns out to be more intertwined than many had previously believed.

Dr Paabo and his colleagues compared their Neanderthal genome (painstakingly reconstructed from three bone samples collected from a cave in Croatia) with that of five living humans from various parts of Africa and Eurasia. Previous genetic analysis, which had only examined DNA passed from mother to child in cellular structures called mitochondria, had suggested no interbreeding between Neanderthals and modern humans. The new, more extensive examination, which looks at DNA in the cell nucleus rather than in the mitochondria, shows this conclusion is wrong. By comparing the DNA in the cell nucleus of Africans (whose ancestors could not have crossbred with Neanderthals, since they did not overlap with them) and various Eurasians (whose ancestors could have crossbred with Neanderthals), Dr Paabo has shown that Eurasians are between one percent and four percent Neanderthal.

That is intriguing. It shows that even after several hundred thousand years of separation, the two species were inter-fertile. It is strange, though, that no Neanderthal mitochondrial DNA has turned up in modern humans, since the usual pattern of invasion in historical times was for the invaders’ males to mate with the invaded’s females. One piece of self-knowledge, then - at least for non-Africans - is that they have a dash of Neanderthal in them. But Dr Paabo’s work also illuminates the differences between the species. By comparing modern humans, Neanderthals, and chimpanzees, it is possible to distinguish genetic changes which are shared by several species of human in their evolution away from the great-ape lineage, from those which are unique to Homo sapiens.

More than 90 percent of the ‘human accelerated regions’ [3] that have been identified in modern people are found in Neanderthals too. However, the rest are not. Dr Paabo has identified 212 parts of the genome that seem to have undergone significant evolution since the species split. The state of genome science is still quite primitive, and it is often unclear what any given bit of DNA is actually doing. But an examination of the 20 largest regions of DNA that have evolved in this way shows that they include several genes which are associated with cognitive ability, and whose malfunction causes serious mental problems. These genes therefore look like good places to start the search for modern humanity’s essence.

The newly evolved regions of DNA also include a gene called RUNX2, which controls bone growth. That may account for differences in the shape of the skull and the rib cage between the two species. By contrast an earlier phase of the study had already shown that Neanderthals and moderns share the same version of a gene called FOXP2, which is involved in the ability to speak, and which differs in chimpanzees. It is all, then, very promising - and a second coup in quick succession for Dr Paabo. Another of his teams has revealed the existence of a hitherto unsuspected species of human, using mitochondrial DNA found in a little-finger bone. If that species, too, could have its full genome read, humanity’s ability to know itself would be enhanced even further.

[1] an individual’s complete set of genes

[2] the scientific name for modem humans

[3] parts of the human genome which evolved very rapidly

 

 

Tea and the Industrial Revolution

A Cambridge professor says that a change in drinking habits was the reason for the Industrial Revolution in Britain. Anjana Ahuja reports

Alan Macfarlane, professor of anthropological science at King’s College, Cambridge, has, like other historians, spent decades wrestling with the enigma of the Industrial Revolution. Why did this particular Big Bang, the world-changing birth of industry, happen in Britain? And why did it strike at the end of the 18th century?

Macfarlane compares the puzzle to a combination lock. ‘There are about 20 different factors and all of them need to be present before the revolution can happen,’ he says. For industry to take off, there needs to be the technology and power to drive factories, large urban populations to provide cheap labour, easy transport to move goods around, an affluent middle-class willing to buy mass-produced objects, a market-driven economy and a political system that allows this to happen. While this was the case for England, other nations, such as Japan, the Netherlands and France, also met some of these criteria but were not industrialising. All these factors must have been necessary, but not sufficient to cause the revolution, says Macfarlane. ‘After all, Holland had everything except coal while China also had many of these factors. Most historians are convinced there are one or two missing factors that you need to open the lock.’

The missing factors, he proposes, are to be found in almost every kitchen cupboard. Tea and beer, two of the nation’s favourite drinks, fuelled the revolution. The antiseptic properties of tannin, the active ingredient in tea, and of hops in beer – plus the fact that both are made with boiled water – allowed urban communities to flourish at close quarters without succumbing to water-borne diseases such as dysentery. The theory sounds eccentric, but once he starts to explain the detective work that went into his deduction, the scepticism gives way to wary admiration. Macfarlane’s case has been strengthened by support from notable quarters – Roy Porter, the distinguished medical historian, recently wrote a favourable appraisal of his research.

Macfarlane had wondered for a long time how the Industrial Revolution came about. Historians had alighted on one interesting factor around the mid-18th century that required explanation. Between about 1650 and 1740, the population in Britain was static. But then there was a burst in population growth. Macfarlane says: ‘The infant mortality rate halved in the space of 20 years, and this happened in both rural areas and cities, and across all classes. People suggested four possible causes. Was there a sudden change in the viruses and bacteria around? Unlikely. Was there a revolution in medical science? But this was a century before Lister’s revolution*. Was there a change in environmental conditions? There were improvements in agriculture that wiped out malaria, but these were small gains. Sanitation did not become widespread until the 19th century. The only option left is food. But the height and weight statistics show a decline. So the food must have got worse. Efforts to explain this sudden reduction in child deaths appeared to draw a blank.’

This population burst seemed to happen at just the right time to provide labour for the Industrial Revolution. ‘When you start moving towards an industrial revolution, it is economically efficient to have people living close together,’  says Macfarlane. ‘But then you get disease, particularly from human waste.’ Some digging around in historical records revealed that there was a change in the incidence of water-borne disease at that time, especially dysentery. Macfarlane deduced that whatever the British were drinking must have been important in regulating disease. He says, ‘We drank beer. For a long time, the English were protected by the strong antibacterial agent in hops, which were added to help preserve the beer. But in the late 17th century a tax was introduced on malt, the basic ingredient of beer. The poor turned to water and gin and in the 1720s the mortality rate began to rise again. Then it suddenly dropped again. What caused this?’

Macfarlane looked to Japan, which was also developing large cities about the same time, and also had no sanitation. Water-borne diseases had a much looser grip on the Japanese population than on Britain’s. Could it be the prevalence of tea in their culture? Macfarlane then noted that the history of tea in Britain provided an extraordinary coincidence of dates. Tea was relatively expensive until Britain started a direct clipper trade with China in the early 18th century. By the 1740s, about the time that infant mortality was dipping, the drink was common. Macfarlane guessed that the fact that water had to be boiled, together with the stomach-purifying properties of tea, meant that the breast milk provided by mothers was healthier than it had ever been. No other European nation sipped tea like the British, which, by Macfarlane’s logic, pushed these other countries out of contention for the revolution.

But, if tea is a factor in the combination lock, why didn’t Japan forge ahead in a tea-soaked industrial revolution of its own? Macfarlane notes that even though 17th-century Japan had large cities, high literacy rates, even a futures market, it had turned its back on the essence of any work-based revolution by giving up labour-saving devices such as animals, afraid that they would put people out of work. So, the nation that we now think of as one of the most technologically advanced entered the 19th century having ‘abandoned the wheel’.

 

 

THE LITTLE ICE AGE

This book will provide a detailed examination of the Little Ice Age and other climatic shifts, but, before I embark on that, let me provide a historical context. We tend to think of climate - as opposed to weather - as something unchanging, yet humanity has been at the mercy of climate change for its entire existence, with at least eight glacial episodes in the past 730,000 years. Our ancestors adapted to the universal but irregular global warming since the end of the last great Ice Age, around 10,000 years ago, with dazzling opportunism. They developed strategies for surviving harsh drought cycles, decades of heavy rainfall or unaccustomed cold; adopted agriculture and stock-raising, which revolutionised human life; and founded the world’s first pre-industrial civilisations in Egypt, Mesopotamia and the Americas. But the price of sudden climate change, in famine, disease and suffering, was often high.

The Little Ice Age lasted from roughly 1300 until the middle of the nineteenth century. Only two centuries ago, Europe experienced a cycle of bitterly cold winters; mountain glaciers in the Swiss Alps were the lowest in recorded memory, and pack ice surrounded Iceland for much of the year. The climatic events of the Little Ice Age did more than help shape the modern world. They are the deeply important context for the current unprecedented global warming. The Little Ice Age was far from a deep freeze, however; rather an irregular seesaw of rapid climatic shifts, few lasting more than a quarter-century, driven by complex and still little understood interactions between the atmosphere and the ocean. The seesaw brought cycles of intensely cold winters and easterly winds, then switched abruptly to years of heavy spring and early summer rains, mild winters, and frequent Atlantic storms, or to periods of droughts, light northeasterly winds, and summer heat waves.

Reconstructing the climate changes of the past is extremely difficult, because systematic weather observations began only a few centuries ago, in Europe and North America. Records from India and tropical Africa are even more recent. For the time before records began, we have only ‘proxy records’ reconstructed largely from tree rings and ice cores, supplemented by a few incomplete written accounts. We now have hundreds of tree-ring records from throughout the northern hemisphere, and many from south of the equator, too, amplified with a growing body of temperature data from ice cores drilled in Antarctica, Greenland, the Peruvian Andes, and other locations. We are close to a knowledge of annual summer and winter temperature variations over much of the northern hemisphere going back 600 years.

 

This book is a narrative history of climatic shifts during the past ten centuries, and some of the ways in which people in Europe adapted to them. Part One describes the Medieval Warm Period, roughly 900 to 1200. During these three centuries, Norse voyagers from Northern Europe explored northern seas, settled Greenland, and visited North America. It was not a time of uniform warmth, for then, as always since the Great Ice Age, there were constant shifts in rainfall and temperature. Mean European temperatures were about the same as today, perhaps slightly cooler.

It is known that the Little Ice Age cooling began in Greenland and the Arctic in about 1200. As the Arctic ice pack spread southward, Norse voyages to the west were rerouted into the open Atlantic, then ended altogether. Storminess increased in the North Atlantic and North Sea. Colder, much wetter weather descended on Europe between 1315 and 1319, when thousands perished in a continent-wide famine. By 1400, the weather had become decidedly more unpredictable and stormier, with sudden shifts and lower temperatures that culminated in the cold decades of the late sixteenth century. Fish were a vital commodity in growing towns and cities, where food supplies were a constant concern. Dried cod and herring were already the staples of the European fish trade, but changes in water temperatures forced fishing fleets to work further offshore. The Basques, Dutch, and English developed the first offshore fishing boats adapted to a colder and stormier Atlantic. A gradual agricultural revolution in northern Europe stemmed from concerns over food supplies at a time of rising populations. The revolution involved intensive commercial farming and the growing of animal fodder on land not previously used for crops. The increased productivity from farmland made some countries self-sufficient in grain and livestock and offered effective protection against famine.

Global temperatures began to rise slowly after 1850, with the beginning of the Modern Warm Period. There was a vast migration from Europe by land-hungry farmers and others, to which the famine caused by the Irish potato blight contributed, to North America, Australia, New Zealand, and southern Africa. Millions of hectares of forest and woodland fell before the newcomers’ axes between 1850 and 1890, as intensive European farming methods expanded across the world. The unprecedented land clearance released vast quantities of carbon dioxide into the atmosphere, triggering for the first time humanly caused global warming. Temperatures climbed more rapidly in the twentieth century as the use of fossil fuels proliferated and greenhouse gas levels continued to soar. The rise has been even steeper since the early 1980s. The Little Ice Age has given way to a new climatic regime, marked by prolonged and steady warming. At the same time, extreme weather events like Category 5 hurricanes are becoming more frequent.

Learning lessons from the past

Many past societies collapsed or vanished, leaving behind monumental ruins such as those that the poet Shelley imagined in his sonnet, Ozymandias. By collapse, I mean a drastic decrease in human population size and/or political/economic/social complexity, over a considerable area, for an extended time. By those standards, most people would consider the following past societies to have been famous victims of full-fledged collapses rather than of just minor declines: the Anasazi and Cahokia within the boundaries of the modern US, the Maya cities in Central America, Moche and Tiwanaku societies in South America, Norse Greenland, Mycenean Greece and Minoan Crete in Europe, Great Zimbabwe in Africa, Angkor Wat and the Harappan Indus Valley cities in Asia, and Easter Island in the Pacific Ocean.

The monumental ruins left behind by those past societies hold a fascination for all of us. We marvel at them when as children we first learn of them through pictures. When we grow up, many of us plan vacations in order to experience them at first hand. We feel drawn to their often spectacular and haunting beauty, and also to the mysteries that they pose. The scales of the ruins testify to the former wealth and power of their builders. Yet these builders vanished, abandoning the great structures that they had created at such effort. How could a society that was once so mighty end up collapsing?

It has long been suspected that many of those mysterious abandonments were at least partly triggered by ecological problems: people inadvertently destroying the environmental resources on which their societies depended. This suspicion of unintended ecological suicide (ecocide) has been confirmed by discoveries made in recent decades by archaeologists, climatologists, historians, paleontologists, and palynologists (pollen scientists). The processes through which past societies have undermined themselves by damaging their environments fall into eight categories, whose relative importance differs from case to case: deforestation and habitat destruction, soil problems, water management problems, overhunting, overfishing, effects of introduced species on native species, human population growth, and increased per-capita impact of people.

Those past collapses tended to follow somewhat similar courses constituting variations on a theme. Writers find it tempting to draw analogies between the course of human societies and the course of individual human lives - to talk of a society’s birth, growth, peak, old age and eventual death. But that metaphor proves erroneous for many past societies: they declined rapidly after reaching peak numbers and power, and those rapid declines must have come as a surprise and shock to their citizens. Obviously, too, this trajectory is not one that all past societies followed unvaryingly to completion: different societies collapsed to different degrees and in somewhat different ways, while many societies did not collapse at all.

Today many people feel that environmental problems overshadow all the other threats to global civilisation. These environmental problems include the same eight that undermined past societies, plus four new ones: human-caused climate change, build-up of toxic chemicals in the environment, energy shortages, and full human utilisation of the Earth’s photosynthetic capacity. But the seriousness of these current environmental problems is vigorously debated. Are the risks greatly exaggerated, or conversely are they underestimated? Will modern technology solve our problems, or is it creating new problems faster than it solves old ones? When we deplete one resource (e.g. wood, oil, or ocean fish), can we count on being able to substitute some new resource (e.g. plastics, wind and solar energy, or farmed fish)? Isn’t the rate of human population growth declining, such that we’re already on course for the world’s population to level off at some manageable number of people?

Questions like this illustrate why those famous collapses of past civilisations have taken on more meaning than just that of a romantic mystery. Perhaps there are some practical lessons that we could learn from all those past collapses. But there are also differences between the modern world and its problems, and those past societies and their problems. We shouldn't be so naive as to think that study of the past will yield simple solutions, directly transferable to our societies today. We differ from past societies in some respects that put us at lower risk than them; some of those respects often mentioned include our powerful technology (i.e. its beneficial effects), globalisation, modern medicine, and greater knowledge of past societies and of distant modern societies. We also differ from past societies in some respects that put us at greater risk than them: again, our potent technology (i.e., its unintended destructive effects), globalisation (such that now a problem in one part of the world affects all the rest), the dependence of millions of us on modern medicine for our survival, and our much larger human population. Perhaps we can still learn from the past, but only if we think carefully about its lessons.

Buy Nothing Day

“Buy Nothing Day” began in the 1990s in Vancouver, Canada. It was the idea of a man named Kalle Lasn and his organization Adbusters. Before starting Adbusters, Lasn worked for many years in advertising. He helped companies research what influenced people to buy things. But Lasn began to question the ways advertisers influenced people to buy things. He also questioned the culture of buying. Was it good to make people feel like they should always want more and more? “Buy Nothing Day” criticizes this culture of consumerism.

Lasn recognizes that people need to consume things. They have to buy things to eat, live and even enjoy life. But Lasn believes that many companies encourage people to consume far more than is necessary. Advertising this way helps companies make money. But Lasn believes it hurts people and culture.

So Lasn decided to use advertising against companies. Adbusters tries to help people understand some of the false values and ideas behind advertising. The main value Adbusters fights is the idea, "You must consume more to be happy." And one way they do this is by encouraging people to celebrate “Buy Nothing Day!”

“Buy Nothing Day” is celebrated on the fourth Friday of every November. Adbusters chose this day for a very important reason. It is the biggest buying day of the year. Advertisers call this day Black Friday.

Black Friday is particularly famous in the United States. It is the day after the country’s Thanksgiving holiday. On Thanksgiving, people in the United States gather with family and friends to eat a meal and give thanks. In recent years, stores began to reduce their prices the day after Thanksgiving. They wanted to encourage people to start buying gifts for the Christmas holiday in December.

However, in recent years, Black Friday has become famous for something else: greed and violence. On Black Friday stores offer extremely reduced prices. But they only offer limited amounts of product. So, people come early in the morning - or even the night before - to stand in lines outside stores. Sometimes, people push or fight to be first into the store. Some people have even died in Black Friday riots!

“Buy Nothing Day” hopes to end the greed and violence of Black Friday. But its message is bigger than just Black Friday. “Buy Nothing Day” is for people in the United States and around the world. Many other countries also have growing problems with too much consumption. Sixty-two different countries, from Germany to Japan, already celebrate “Buy Nothing Day”. And the message is the same everywhere - buying too much hurts people, culture and the planet. 

Buy Nothing Day is a simple idea. It fights consumer culture by asking us to stop buying for a day. Anyone can do it if they spend a day without buying. For some people, “Buy Nothing Day” is a protest. For other people, it is a party. Some groups go to stores and encourage other people not to buy things. Other people gather together to make Christmas gifts - instead of buying them. And some people use the day to create works of art that protest against consumer messages. Often, people celebrate by enjoying the free gift of nature. They go for walks, or watch the sun set together. The only rule of “Buy Nothing Day” is not to buy anything!

Some people question if “Buy Nothing Day” can really change culture. It is only one day. And telling people not to do something often does not work! Other people say that consumers should not just buy less, but they should buy better. These people encourage consumers to buy things that are made in ways that do not hurt people or the environment.

But “Buy Nothing Day” does get people thinking about the negative effects of buying too much. Many people have described celebrating “Buy Nothing Day” as a deep learning experience, comparing it to giving up an addiction to drugs.

Buying more and more things can be like an addiction. Often, the more people buy things, the more things they want. People are happier and more satisfied when they spend money on experiences instead of things. Satisfaction over purchases decreases over time. A new car does not stay new for very long. But a satisfying experience often becomes more positive over time as we remember it.

 

 

Geoff Brash

Geoff Brash, who died in 2010, was a gregarious Australian businessman and philanthropist who encouraged the young to reach their potential.

Born in Melbourne to Elsa and Alfred Brash, he was educated at Scotch College. His sister, Barbara, became a renowned artist and printmaker. His father, Alfred, ran the Brash retail music business that had been founded in 1862 by his grandfather, the German immigrant Marcus Brasch, specialising in pianos. It carried the slogan ‘A home is not a home without a piano.’

In his young days, Brash enjoyed the good life, playing golf and sailing, and spending some months travelling through Europe, having a leisurely holiday. He worked for a time at Myer department stores before joining the family business in 1949, where he quickly began to put his stamp on things. In one of his first management decisions, he diverged from his father’s sense of frugal aesthetics by re-carpeting the old man’s office while he was away. After initially complaining of his extravagance, his father grew to accept the change and gave his son increasing responsibility in the business.

After World War II (1939-1945), Brash’s had begun to focus on white goods, such as washing machines and refrigerators, as the consumer boom took hold. However, while his father was content with the business he had built, the younger Brash viewed expansion as vital. When Geoff Brash took over as managing director in 1957, the company had two stores, but after floating it on the stock exchange the following year, he expanded rapidly and opened suburban stores, as well as buying into familiar music industry names such as Allans, Palings and Suttons. Eventually, 170 stores traded across the continent under the Brash’s banner.

Geoff Brash learned from his father’s focus on customer service. Alfred Brash had also been a pioneer in introducing a share scheme for his staff, and his son retained and expanded the plan following the float.

Geoff Brash was optimistic and outward looking. As a result, he was a pioneer in both accessing and selling new technology, and developing overseas relationships. He sourced and sold electric guitars, organs, and a range of other modern instruments, as well as state-of-the-art audio and video equipment. He developed a relationship with Taro Kakehashi, the founder of Japan’s Roland group, which led to a joint venture that brought electronic musical devices to Australia.

In 1965, Brash and his wife attended a trade fair in Guangzhou, the first of its kind in China; they were one of the first Western business people allowed into the country following Mao Zedong’s Cultural Revolution. He returned there many times, helping advise the Chinese in establishing a high quality piano factory in Beijing; he became the factory’s agent in Australia. Brash also took leading jazz musicians Don Burrows and James Morrison to China, on a trip that reintroduced jazz to many Chinese musicians.

He stood down as Executive Chairman of Brash’s in 1988, but under the new management debt became a problem, and in 1994 the banks called in administrators. The company was sold to Singaporean interests and continued to trade until 1998, when it again went into administration. The Brash name then disappeared from the retail world. Brash was greatly disappointed by the collapse and the eventual disappearance of the company he had run for so long. But it was not long before he invested in a restructured Allan’s music business.

Brash was a committed philanthropist who, in the mid-1980s, established the Brash Foundation, which eventually morphed, with other partners, into the Soundhouse Music Alliance. This was a not-for-profit organisation overseeing and promoting multimedia music making and education for teachers and students. The Soundhouse offers teachers and young people the opportunity to get exposure to the latest music technology, and to use this to compose and record their own music, either alone or in collaboration. The organisation has now also established branches in New Zealand, South Africa and Ireland, as well as numerous sites around Australia.

 

 

Delivering The Goods

The vast expansion in international trade owes much to a revolution in the business of moving freight 

International trade is growing at a startling pace. While the global economy has been expanding at a bit over 3% a year, the volume of trade has been rising at a compound annual rate of about twice that. Foreign products, from meat to machinery, play a more important role in almost every economy in the world, and foreign markets now tempt businesses that never much worried about sales beyond their nation's borders.
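
To put that compound rate in perspective, here is a rough illustrative calculation; the doubling times below are my own arithmetic using the rule of 72, not figures from the article, and they assume output growth of about 3% a year and trade growth of about 6%:

\[
\text{doubling time} \approx \frac{72}{\text{annual growth rate (\%)}}, \qquad \frac{72}{3} = 24 \text{ years for output}, \qquad \frac{72}{6} = 12 \text{ years for trade}
\]

On these assumptions, the volume of trade doubles roughly twice as fast as world output, which is why foreign products loom ever larger in almost every economy.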

What lies behind this explosion in international commerce? The general worldwide decline in trade barriers, such as customs duties and import quotas, is surely one explanation. The economic opening of countries that have traditionally been minor players is another. But one force behind the import-export boom has passed all but unnoticed: the rapidly falling cost of getting goods to market. Theoretically, in the world of trade, shipping costs do not matter. Goods, once they have been made, are assumed to move instantly and at no cost from place to place. The real world, however, is full of frictions. Cheap labour may make Chinese clothing competitive in America, but if delays in shipment tie up working capital and cause winter coats to arrive in spring, trade may lose its advantages.

At the turn of the 20th century, agriculture and manufacturing were the two most important sectors almost everywhere, accounting for about 70% of total output in Germany, Italy and France, and 40-50% in America, Britain and Japan. International commerce was therefore dominated by raw materials, such as wheat, wood and iron ore, or processed commodities, such as meat and steel. But these sorts of products are heavy and bulky and the cost of transporting them relatively high.

Countries still trade disproportionately with their geographic neighbours. Over time, however, world output has shifted into goods whose worth is unrelated to their size and weight. Today, it is finished manufactured products that dominate the flow of trade, and, thanks to technological advances such as lightweight components, manufactured goods themselves have tended to become lighter and less bulky. As a result, less transportation is required for every dollar's worth of imports or exports.

To see how this influences trade, consider the business of making disk drives for computers. Most of the world's disk-drive manufacturing is concentrated in South-east Asia. This is possible only because disk drives, while valuable, are small and light and so cost little to ship. Computer manufacturers in Japan or Texas will not face hugely bigger freight bills if they import drives from Singapore rather than purchasing them on the domestic market. Distance therefore poses no obstacle to the globalisation of the disk-drive industry.

This is even more true of the fast-growing information industries. Films and compact discs cost little to transport, even by aeroplane. Computer software can be 'exported' without ever loading it onto a ship, simply by transmitting it over telephone lines from one country to another, so freight rates and cargo-handling schedules become insignificant factors in deciding where to make the product. Businesses can locate based on other considerations, such as the availability of labour, while worrying less about the cost of delivering their output.

In many countries deregulation has helped to drive the process along. But, behind the scenes, a series of technological innovations known broadly as containerisation and intermodal transportation has led to swift productivity improvements in cargo-handling. Forty years ago, the process of exporting or importing involved a great many stages of handling, which risked portions of the shipment being damaged or stolen along the way. The invention of the container crane made it possible to load and unload containers without capsizing the ship and the adoption of standard container sizes allowed almost any box to be transported on any ship. By 1967, dual-purpose ships, carrying loose cargo in the hold* and containers on the deck, were giving way to all-container vessels that moved thousands of boxes at a time.

The shipping container transformed ocean shipping into a highly efficient, intensely competitive business. But getting the cargo to and from the dock was a different story. National governments, by and large, kept a much firmer hand on truck and railroad tariffs than on charges for ocean freight. This started changing, however, in the mid-1970s, when America began to deregulate its transportation industry. First airlines, then road hauliers and railways, were freed from restrictions on what they could carry, where they could haul it and what price they could charge. Big productivity gains resulted. Between 1985 and 1996, for example, America's freight railways dramatically reduced their employment, trackage, and their fleets of locomotives - while increasing the amount of cargo they hauled. Europe's railways have also shown marked, albeit smaller, productivity improvements.

In America the period of huge productivity gains in transportation may be almost over, but in most countries the process still has far to go. State ownership of railways and airlines, regulation of freight rates and toleration of anti-competitive practices, such as cargo-handling monopolies, all keep the cost of shipping unnecessarily high and deter international trade. Bringing these barriers down would help the world’s economies grow even closer.

* hold: ship's storage area below deck

 

 

 

Change in business organisations

The forces that operate to bring about change in organisations can be thought of as winds which are many and varied - from small summer breezes that merely disturb a few papers, to mighty howling gales which cause devastation to structures and operations, causing consequent reorientation of purpose and rebuilding. Sometimes, however, the winds die down to give periods of relative calm, periods of relative organisational stability. Such a period was the agricultural age, which Goodman (1995) maintains prevailed in Europe and western societies as a whole until the early 1700s. During this period, wealth was created in the context of an agriculturally based society influenced mainly by local markets (both customer and labour) and factors outside people’s control, such as the weather. During this time, people could fairly well predict the cycle of activities required to maintain life, even if that life might be at little more than subsistence level.

To maintain the meteorological metaphor, stronger winds of change blew to bring in the Industrial Revolution and the industrial age. Again, according to Goodman, this lasted for a long time, until around 1945. It was characterised by a series of inventions and innovations that reduced the number of people needed to work the land and, in turn, provided the means of production of hitherto rarely obtainable goods; for organisations, supplying these in ever-increasing numbers became the aim. To a large extent, demand and supply were predictable, enabling companies to structure their organisations along what Burns and Stalker (1966) described as mechanistic lines, that is, as systems of strict hierarchical structures and firm means of control.

This situation prevailed for some time, with demand still coming mainly from the domestic market and organisations striving to fill the ‘supply gap’. Thus the most disturbing environmental influence on organisations of this time was the demand for products, which outstripped supply. The saying attributed to Henry Ford that ‘You can have any colour of car so long as it is black’, gives a flavour of the supply-led state of the market. Apart from any technical difficulties of producing different colours of car, Ford did not have to worry about customers’ colour preferences: he could sell all that he made. Organisations of this period can be regarded as ‘task-oriented’, with effort being put into increasing production through more effective and efficient production processes.

As time passed, this favourable period for organisations began to decline. In the neo-industrial age, people became more discriminating in the goods and services they wished to buy and, as technological advancements brought about increased productivity, supply overtook demand. Companies began, increasingly, to look abroad for additional markets.

At the same time, organisations faced more intensive competition from abroad for their own products and services. In the West, this development was accompanied by a shift in focus from manufacturing to service, whether this merely added value to manufactured products, or whether it was service in its own right. In the neo-industrial age of western countries, the emphasis moved towards adding value to goods and services - what Goodman calls the value-oriented time, as contrasted with the task-oriented and products/services-oriented times of the past.

Today, in the post-industrial age, most people agree that organisational life is becoming ever more uncertain, as the pace of change quickens and the future becomes less predictable. Writing in 1999, Nadler and Tushman, two US academics, said: ‘Poised on the eve of the next century, we are witnessing a profound transformation in the very nature of our business organisations. Historic forces have converged to fundamentally reshape the scope, strategies, and structures of large enterprises.’ At a less general level of analysis, Graeme Leach, Chief Economist at the British Institute of Directors, claimed in the Guardian newspaper (2000) that: ‘By 2020, the nine-to-five rat race will be extinct and present levels of self-employment, commuting and technology use, as well as age and sex gaps, will have changed beyond recognition.’ According to the article, Leach anticipates that: ‘In 20 years’ time, 20-25 percent of the workforce will be temporary workers and many more will be flexible, ... 25 percent of people will no longer work in a traditional office and ... 50 percent will work from home in some form.’ Continuing to use the ‘winds of change’ metaphor, the expectation is of damaging gale-force winds bringing the need for rebuilding that takes the opportunity to incorporate new ideas and ways of doing things.

Whether all this will happen is arguable. Forecasting the future is always fraught with difficulties. For instance, Mannermann (1998) sees future studies as part art and part science and notes: ‘The future is full of surprises, uncertainty, trends and trend breaks, irrationality and rationality, and it is changing and escaping from our hands as time goes by. It is also the result of actions made by innumerable more or less powerful forces.’ What seems certain is that the organisational world is changing at a fast rate - even if the direction of change is not always predictable. Consequently, it is crucial that organisational managers and decision makers are aware of, and able to analyse, the factors which trigger organisational change.

 

Space: The Final Archaeological Frontier

Space travel may still have a long way to go, but the notion of archaeological research and heritage management in space is already concerning scientists and environmentalists.
 

In 1993, University of Hawaii’s anthropologist Ben Finney, who for much of his career has studied the technology once used by Polynesians to colonize islands in the Pacific, suggested that it would not be premature to begin thinking about the archaeology of Russian and American aerospace sites on the Moon and Mars. Finney pointed out that just as today’s scholars use archaeological records to investigate how Polynesians diverged culturally as they explored the Pacific, archaeologists will someday study off-Earth sites to trace the development of humans in space. He realized that it was unlikely anyone would be able to conduct fieldwork in the near future, but he was convinced that one day such work would be done.

There is a growing awareness, however, that it won’t be long before both corporate adventurers and space tourists reach the Moon and Mars. There is a wealth of important archaeological sites from the history of space exploration on the Moon and Mars and measures need to be taken to protect these sites. In addition to the threat from profit-seeking corporations, scholars cite other potentially destructive forces such as souvenir hunting and unmonitored scientific sampling, as has already occurred in explorations of remote polar regions. Already in 1999 one company was proposing a robotic lunar rover mission beginning at the site of Tranquility Base and rumbling across the Moon from one archaeological site to another, from the wreck of the Ranger 8 probe to Apollo 17’s landing site. The mission, which would leave vehicle tyre-marks all over some of the most famous sites on the Moon, was promoted as a form of theme-park entertainment.

According to the vaguely worded United Nations Outer Space Treaty of 1967, what it terms ‘space junk’ remains the property of the country that sent the craft or probe into space. But the treaty doesn’t explicitly address protection of sites like Tranquility Base, and equating the remains of human exploration of the heavens with ‘space junk’ leaves them vulnerable to scavengers. Another problem arises through other international treaties proclaiming that land in space cannot be owned by any country or individual. This presents some interesting dilemmas for the aspiring manager of extraterrestrial cultural resources. Does the US own Neil Armstrong's famous first footprints on the Moon but not the lunar dust in which they were recorded? Surely those footprints are as important in the story of human development as those left by hominids at Laetoli, Tanzania. But unlike the Laetoli prints, which have survived for 3.5 million years encased in cement-like ash, those at Tranquility Base could be swept away with a casual brush of a space tourist’s hand. To deal with problems like these, it may be time to look to innovative international administrative structures for the preservation of historic remains on the new frontier.

The Moon, with its wealth of sites, will surely be the first destination of archaeologists trained to work in space. But any young scholars hoping to claim the mantle of history’s first lunar archaeologist will be disappointed. That distinction is already taken.

On November 19, 1969, astronauts Charles Conrad and Alan Bean made a difficult manual landing of the Apollo 12 lunar module in the Moon’s Ocean of Storms, just a few hundred feet from an unmanned probe, Surveyor 3, that had landed in a crater on April 19, 1967. Unrecognized at the time, this was an important moment in the history of science. Bean and Conrad were about to conduct the first archaeological studies on the Moon.

After the obligatory planting of the American flag and some geological sampling, Conrad and Bean made their way to Surveyor 3. They observed that the probe had bounced after touchdown and carefully photographed the impressions made by its footpads. The whole spacecraft was covered in dust, perhaps kicked up by the landing.

The astronaut-archaeologists carefully removed the probe’s television camera, remote sampling arm, and pieces of tubing. They bagged and labelled these artefacts, and stowed them on board their lunar module. On their return to Earth, they passed them on to the Daveson Space Center in Houston, Texas, and the Hughes Air and Space Corporation in El Segundo, California. There, scientists analyzed the changes in these aerospace artefacts.

One result of the analysis astonished them. A fragment of the television camera revealed evidence of the bacteria Streptococcus mitis. For a moment it was thought Conrad and Bean had discovered evidence for life on the Moon, but after further research the real explanation became apparent. While the camera was being installed in the probe prior to the launch, someone sneezed on it. The resulting bacteria had travelled to the Moon, remained in an alternating freezing/boiling vacuum for more than two years, and returned promptly to life upon reaching the safety of a laboratory back on Earth.

The finding that not even the vastness of space can stop humans from spreading a sore throat was an unexpected spin-off. But the artefacts brought back by Bean and Conrad have a broader significance. Simple as they may seem, they provide the first example of extraterrestrial archaeology and, perhaps more significantly for the history of the discipline, of formational archaeology: the study of environmental and cultural forces upon the life history of human artefacts in space.

Fun for the Masses

Americans worry that the distribution of income is increasingly unequal. Examining leisure spending changes that picture.

Are you better off than you used to be? Even after six years of sustained economic growth, Americans worry about that question. Economists who plumb government income statistics agree that Americans’ incomes, as measured in inflation-adjusted dollars, have risen more slowly in the past two decades than in earlier times, and that some workers’ real incomes have actually fallen. They also agree that by almost any measure, income is distributed less equally than it used to be. Neither of those claims, however, sheds much light on whether living standards are rising or falling. This is because ‘living standard’ is a highly amorphous concept. Measuring how much people earn is relatively easy, at least compared with measuring how well they live.

A recent paper by Dora Costa, an economist at the Massachusetts Institute of Technology, looks at the living-standards debate from an unusual direction. Rather than worrying about cash incomes, Ms Costa investigates Americans’ recreational habits over the past century. She finds that people of all income levels have steadily increased the amount of time and money they devote to having fun. The distribution of dollar incomes may have become more skewed in recent years, but leisure is more evenly spread than ever.

Ms Costa bases her research on consumption surveys dating back as far as 1888. The industrial workers surveyed in that year spent, on average, three-quarters of their incomes on food, shelter and clothing. Less than 2% of the average family’s income was spent on leisure but that average hid large disparities. The share of a family’s budget that was spent on having fun rose sharply with its income: the lowest-income families in this working-class sample spent barely 1% of their budgets on recreation, while higher earners spent more than 3%. Only the latter group could afford such extravagances as theatre and concert performances, which were relatively much more expensive than they are today.

Since those days, leisure has steadily become less of a luxury. By 1991, the average household needed to devote only 38% of its income to the basic necessities, and was able to spend 6% on recreation. Moreover, Ms Costa finds that the share of the family budget spent on leisure now rises much less sharply with income than it used to. At the beginning of this century a family’s recreational spending tended to rise by 20% for every 10% rise in income. By 1972-73, a 10% income gain led to roughly a 15% rise in recreational spending, and the increase fell to only 13% in 1991. What this implies is that Americans of all income levels are now able to spend much more of their money on having fun.
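
These figures can be restated as an income elasticity of recreational spending; the elasticity framing below is mine, added only to make the arithmetic in the passage explicit:

\[
\varepsilon = \frac{\%\ \text{change in recreational spending}}{\%\ \text{change in income}}, \qquad \varepsilon_{\text{c.1900}} \approx \frac{20}{10} = 2.0, \qquad \varepsilon_{1972\text{-}73} \approx \frac{15}{10} = 1.5, \qquad \varepsilon_{1991} \approx \frac{13}{10} = 1.3
\]

The fall from 2.0 towards 1 means that leisure spending is far less concentrated among higher earners than it was at the beginning of the century.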

One obvious cause is that real income overall has risen. If Americans in general are richer, their consumption of entertainment goods is less likely to be affected by changes in their income. But Ms Costa reckons that rising incomes are responsible for, at most, half of the changing structure of leisure spending. Much of the rest may be due to the fact that poorer Americans have more time off than they used to. In earlier years, low-wage workers faced extremely long hours and enjoyed few days off. But since the 1940s, the less skilled (and lower paid) have worked ever-fewer hours, giving them more time to enjoy leisure pursuits.

Conveniently, Americans have had an increasing number of recreational possibilities to choose from. Public investment in sports complexes, parks and golf courses has made leisure cheaper and more accessible. So too has technological innovation. Where listening to music used to imply paying for concert tickets or owning a piano, the invention of the radio made music accessible to everyone and virtually free. Compact discs, videos and other paraphernalia have widened the choice even further.

At a time when many economists are pointing accusing fingers at technology for causing a widening inequality in the wages of skilled and unskilled workers, Ms Costa’s research gives it a much more egalitarian face. High earners have always been able to afford amusement. By lowering the price of entertainment, technology has improved the standard of living of those in the lower end of the income distribution. The implication of her results is that once recreation is taken into account, the differences in Americans’ living standards may not have widened so much after all.

These findings are not water-tight. Ms Costa’s results depend heavily upon what exactly is classed as a recreational expenditure. Reading is an example. This was the most popular leisure activity for working men in 1888, accounting for one-quarter of all recreational spending. In 1991, reading took only 16% of the entertainment dollar. But the American Department of Labour’s expenditure surveys do not distinguish between the purchase of a mathematics tome and that of a best-selling novel. Both are classified as recreational expenses. If more money is being spent on textbooks and professional books now than in earlier years, this could make ‘recreational’ spending appear stronger than it really is.

Although Ms Costa tries to address this problem by showing that her results still hold even when tricky categories, such as books, are removed from the sample, the difficulty is not entirely eliminated. Nonetheless, her broad conclusion seems fair. Recreation is more available to all and less dependent on income. On this measure at least, inequality of living standards has fallen.

 

 

The economic importance of coral reefs

A lot of people around the world are dependent, or partly dependent, on coral reefs for their livelihoods. They often live adjacent to the reef, and their livelihood revolves around the direct extraction, processing and sale of reef resources such as shellfish and seaweeds. In addition, their homes are sheltered by the reef from wave action.

Reef flats and shallow reef lagoons are accessible on foot, without the need for a boat, and so allow women, children and the elderly to engage directly in manual harvesting, or ‘reef-gleaning’. This is a significant factor distinguishing reef-based fisheries from near-shore sea fisheries. Near-shore fisheries are typically the domain of adult males, in particular where they involve the use of boats, with women and children restricted mainly to shore-based activities. However, in a coral-reef fishery the physical accessibility of the reef opens up opportunities for direct participation by women, and consequently increases their independence and the importance of their role in the community. It also provides a place for children to play, and to acquire important skills and knowledge for later in life. For example, in the South West Island of Tobi, in the Pacific Ocean, young boys use simple hand lines with a loop and bait at the end to develop the art of fishing on the reef. Similarly, in the Surin Islands of Thailand, young Moken boys spend much of their time playing, swimming and diving in shallow reef lagoons, and in doing so build crucial skills for their future daily subsistence.

Secondary occupations, such as fish processing and marketing activities, are often dominated by women, and offer an important survival strategy for households with access to few other physical assets (such as boats and gear), for elderly women, widows, or the wives of infirm men. On Ulithi Atoll in the western Pacific, women have a distinct role and rights in the distribution of fish catches. This is because the canoes, made from mahogany logs from nearby Yap Island, are obtained through the exchange of cloth made by the women of Ulithi. Small-scale reef fisheries support the involvement of local women traders and their involvement can give them greater control over the household income, and in negotiating for loans or credit. Thus their role is not only important in providing income for their families, it also underpins the economy of the local village.

Poor people with little access to land, labour and financial resources are particularly reliant on exploiting natural resources, and consequently they are vulnerable to seasonal changes in availability of those resources. The diversity of coral reef fisheries, combined with their physical accessibility and the protection they provide against bad weather, make them relatively stable compared with other fisheries, or land-based agricultural production.

In many places, the reef may even act as a resource bank, used as a means of saving food for future times of need. In Manus, Papua New Guinea, giant clams are collected and held in walled enclosures on the reef, until they are needed during periods of rough weather. In Palau, sea cucumbers are seldom eaten during good weather in an effort to conserve their populations for months during which rough weather prohibits good fishing.

Coral reef resources also act as a buffer against seasonal lows in other sectors, particularly agriculture. For example, in coastal communities in northern Mozambique, reef harvests provide key sources of food and cash when agricultural production is low, with the peak in fisheries production coinciding with the period of lowest agricultural stocks. In Papua New Guinea, while agriculture is the primary means of food production, a large proportion of the coastal population engage in sporadic subsistence fishing.

In many coral-reef areas, tourism is one of the main industries bringing employment, and in many cases is promoted to provide alternatives to fisheries-based livelihoods, and to ensure that local reef resources are conserved. In the Caribbean alone, tours based on scuba-diving have attracted 20 million people in one year. The upgrading of roads and communications associated with the expansion of tourism may also bring benefits to local communities. However, plans for development must be considered carefully. The ability of the poorer members of the community to access the benefits of tourism is far from guaranteed, and requires development guided by social, cultural and environmental principles. There is growing recognition that sustainability is a key requirement, as encompassed in small-scale eco-tourism activities, for instance.

Where tourism development has not been carefully planned, and the needs and priorities of the local community have not been properly recognised, conflict has sometimes arisen between tourism and local, small-scale fishers.

 

 

A Workaholic Economy

FOR THE first century or so of the industrial revolution, increased productivity led to decreases in working hours. Employees who had been putting in 12-hour days, six days a week, found their time on the job shrinking to 10 hours daily, then, finally, to eight hours, five days a week. Only a generation ago social planners worried about what people would do with all this new-found free time. In the US, at least, it seems they need not have bothered.

Although the output per hour of work has more than doubled since 1945, leisure seems reserved largely for the unemployed and underemployed. Those who work full-time spend as much time on the job as they did at the end of World War II. In fact, working hours have increased noticeably since 1970 — perhaps because real wages have stagnated since that year. Bookstores now abound with manuals describing how to manage time and cope with stress.

There are several reasons for lost leisure. Since 1979, companies have responded to improvements in the business climate by having employees work overtime rather than by hiring extra personnel, says economist Juliet B. Schor of Harvard University. Indeed, the current economic recovery has gained a certain amount of notoriety for its “jobless” nature: increased production has been almost entirely decoupled from employment. Some firms are even downsizing as their profits climb. “All things being equal, we’d be better off spreading around the work,” observes labour economist Ronald G. Ehrenberg of Cornell University.

Yet a host of factors pushes employers to hire fewer workers for more hours and, at the same time, compels workers to spend more time on the job. Most of those incentives involve what Ehrenberg calls the structure of compensation: quirks in the way salaries and benefits are organised that make it more profitable to ask 40 employees to labour an extra hour each than to hire one more worker to do the same 40-hour job.

Professional and managerial employees supply the most obvious lesson along these lines. Once people are on salary, their cost to a firm is the same whether they spend 35 hours a week in the office or 70. Diminishing returns may eventually set in as overworked employees lose efficiency or leave for more arable pastures. But in the short run, the employer’s incentive is clear.

Even hourly employees receive benefits - such as pension contributions and medical insurance - that are not tied to the number of hours they work. Therefore, it is more profitable for employers to work their existing employees harder.

For all that employees complain about long hours, they, too, have reasons not to trade money for leisure. “People who work reduced hours pay a huge penalty in career terms,” Schor maintains. “It’s taken as a negative signal about their commitment to the firm.” [Lotte] Bailyn [of Massachusetts Institute of Technology] adds that many corporate managers find it difficult to measure the contribution of their underlings to a firm’s well-being, so they use the number of hours worked as a proxy for output. “Employees know this,” she says, and they adjust their behavior accordingly.

“Although the image of the good worker is the one whose life belongs to the company,” Bailyn says, “it doesn't fit the facts.” She cites both quantitative and qualitative studies that show increased productivity for part-time workers: they make better use of the time they have, and they are less likely to succumb to fatigue in stressful jobs. Companies that employ more workers for less time also gain from the resulting redundancy, she asserts. “The extra people can cover the contingencies that you know are going to happen, such as when crises take people away from the workplace.” Positive experiences with reduced hours have begun to change the more-is-better culture at some companies, Schor reports.

Larger firms, in particular, appear to be more willing to experiment with flexible working arrangements...

It may take even more than changes in the financial and cultural structures of employment for workers successfully to trade increased productivity and money for leisure time, Schor contends. She says the U.S. market for goods has become skewed by the assumption of full-time, two-career households. Automobile makers no longer manufacture cheap models, and developers do not build the tiny bungalows that served the first postwar generation of home buyers. Not even the humblest household object is made without a microprocessor. As Schor notes, the situation is a curious inversion of the “appropriate technology” vision that designers have had for developing countries: U.S. goods are appropriate only for high incomes and long hours.


Language Strategy in Multinational Company

The importance of language management in multinational companies has never been greater than today. Multinationals are becoming ever more conscious of the importance of global coordination as a source of competitive advantage and language remains the ultimate barrier to aspirations of international harmonization. Before attempting to consider language management strategies, companies will have to evaluate the magnitude of the language barrier confronting them and in doing so they will need to examine it in three dimensions: the Language Diversity, the Language Penetration and the Language Sophistication. Companies next need to turn their attention to how they should best manage language. There is a range of options from which MNCs can formulate their language strategy.

Lingua Franca: The simplest answer, though realistic only for English-speaking companies, is to rely on one's native tongue. As recently as 1991 a survey of British exporting companies found that over a third used English exclusively in dealings with foreign customers. This attitude that 'one language fits all' has also been carried through into the Internet age. A survey of the web sites of top American companies confirmed that over half made no provision for foreign language access, and another found that less than 10% of leading companies were able to respond adequately to emails other than in the company's language. Widespread though it is, however, reliance on a single language is a strategy that is fatally flawed. It makes no allowance for the growing trend in Linguistic Nationalism whereby buyers in Asia, South America and the Middle East in particular are asserting their right to 'work in the language of the customer'. It also fails to recognize the increasing vitality of languages such as Spanish, Arabic and Chinese that over time are likely to challenge the dominance of English as a lingua franca. In the IT arena it ignores the rapid globalization of the Internet, where the number of English-language e-commerce transactions, emails and web sites is rapidly diminishing as a percentage of the total. Finally, total reliance on a single language puts the English speaker at risk in negotiations. Contracts, rules and legislation are invariably written in the local language, and a company unable to operate in that language is vulnerable.

Functional Multilingualism: Another improvised approach to language is to rely on what has been termed 'Functional Multilingualism'. Essentially what this means is to muddle through, relying on a mix of languages, pidgins and gestures to communicate by whatever means the parties have at their disposal. In a social context such a shared effort to make one another understood might be considered an aid to the bonding process, with the frustration of communication being regularly punctuated by moments of absurdity and humor. However, as the basis for business negotiations it appears very hit-and-miss. And yet Hagen's recent study suggests that 16% of international business transactions are conducted in a 'cocktail of languages'. Functional Multilingualism shares the same defects as reliance on a lingua franca and increases the probability of cognitive divergence between the parties engaged in the communication.

External Language Resources: A more rational and obvious response to the language barrier is to employ external resources such as translators and interpreters, and certainly there are many excellent companies specialized in these fields. However, such a response is by no means an end to the language barrier. For a start, these services can be very expensive, with a top simultaneous interpreter commanding daily rates as high as those of a partner in an international consulting company. Secondly, any good translator or interpreter will insist that to be fully effective they must understand the context of the subject matter. This is not always possible. In some cases it is prohibited by the complexity or specialization of the topic, sometimes by lack of preparation time, but most often the obstacle is the reluctance of the parties to explain the wider context to an 'outsider'. Another problem is that unless there has been considerable pre-explaining between the interpreter and his clients, it is likely that there will be ambiguity and cultural overtones in the source messages the interpreter has to work with. They will of course endeavor to provide a high-fidelity translation, but in this circumstance the interpreter has to use initiative and guesswork. This clearly injects a potential source of misunderstanding into the proceedings. Finally, while a good interpreter will attempt to convey not only the meaning but also the spirit of any communication, there can be no doubt that there is a loss of rhetorical power when communications go through a third party. So in situations requiring negotiation, persuasion, humor etc., the use of an interpreter is a poor substitute for direct communication.

Training: The immediate and understandable reaction to any skills shortage in a business is to consider personnel development, and certainly the language training industry is well developed, offering programs at almost every level and in numerous languages. However, without doubting the value of language training, no company should be deluded into believing that it alone guarantees success. Training in most companies is geared to the economic cycle. When times are good, money is invested in training. When belts get tightened, training is one of the first 'luxuries' to be pared down. In a study conducted across four European countries, nearly twice as many companies said they needed language training in coming years as had conducted training in past years. This disparity between 'good intentions' and 'actual delivery' underlines the problems of relying upon training for language skills. Unless the company is totally committed to sustaining the strategy even through bad times, it will fail.

One notable and committed leader in the field of language training has been the Volkswagen Group. They have developed a language strategy over many years and in many respects can be regarded as a model of how to manage language professionally. However, the Volkswagen approach underlines that language training has to be considered a strategic rather than a tactical solution. In their system, to progress from 'basics' to 'communications competence' in a language requires the completion of six language stages, each one demanding approximately 90 hours of refresher courses, supported by many more hours of self-study, spread over a 6-9 month period. The completion of each stage is marked by a post-stage achievement test, which is a pre-requisite for continued training. So even this professionally managed program expects a minimum of three years of fairly intensive study to produce an accountant, engineer, buyer or salesperson capable of working effectively in a foreign language. Clearly, companies intending to pursue this route need to do so with realistic expectations and with the intention of sustaining the program over many years. Except in terms of 'brush-up' courses for people who were previously fluent in a foreign language, training cannot be considered a quick fix, and hence other methods will have to be considered.
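
As a rough check on these figures (an illustrative calculation only, assuming the 6-9 month period applies to each of the six stages rather than to the programme as a whole):

\[
6 \times 90\ \text{hours} = 540\ \text{taught hours}, \qquad 6 \times (6\ \text{to}\ 9)\ \text{months} = 36\ \text{to}\ 54\ \text{months} \approx 3\ \text{to}\ 4.5\ \text{years}
\]

which is consistent with the minimum of three years of study mentioned above.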

 

 

Language diversity

One of the most influential ideas in the study of languages is that of universal grammar (UG). Put forward by Noam Chomsky in the 1960s, it is widely interpreted as meaning that all languages are basically the same, and that the human brain is born language-ready, with an in-built programme that is able to interpret the common rules underlying any mother tongue. For five decades this idea prevailed, and influenced work in linguistics, psychology and cognitive science. To understand language, it implied, you must sweep aside the huge diversity of languages, and find their common human core.

Since the theory of UG was proposed, linguists have identified many universal language rules. However, there are almost always exceptions. It was once believed, for example, that if a language had syllables[1] that begin with a vowel and end with a consonant (VC), it would also have syllables that begin with a consonant and end with a vowel (CV). This universal lasted until 1999, when linguists showed that Arrernte, spoken by Indigenous Australians from the area around Alice Springs in the Northern Territory, has VC syllables but no CV syllables.

Other non-universal universals describe the basic rules of putting words together. Take the rule that every language contains four basic word classes: nouns, verbs, adjectives and adverbs. Work in the past two decades has shown that several languages lack an open adverb class, which means that new adverbs cannot be readily formed, unlike in English where you can turn any adjective into an adverb, for example ‘soft’ into ‘softly’. Others, such as Lao, spoken in Laos, have no adjectives at all. More controversially, some linguists argue that a few languages, such as Straits Salish, spoken by indigenous people from north-western regions of North America, do not even have distinct nouns or verbs. Instead, they have a single class of words to include events, objects and qualities.

Even apparently indisputable universals have been found lacking. This includes recursion, or the ability to infinitely place one grammatical unit inside a similar unit, such as ‘Jack thinks that Mary thinks that ... the bus will be on time’. It is widely considered to be the most essential characteristic of human language, one that sets it apart from the communications of all other animals. Yet Dan Everett at Illinois State University recently published controversial work showing that Amazonian Piraha does not have this quality.

But what if the very diversity of languages is the key to understanding human communication? Linguists Nicholas Evans of the Australian National University in Canberra, and Stephen Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, believe that languages do not share a common set of rules. Instead, they say, their sheer variety is a defining feature of human communication - something not seen in other animals. While there is no doubt that human thinking influences the form that language takes, if Evans and Levinson are correct, language in turn shapes our brains. This suggests that humans are more diverse than we thought, with our brains having differences depending on the language environment in which we grew up. And that leads to a disturbing conclusion: every time a language becomes extinct, humanity loses an important piece of diversity.

If languages do not obey a single set of shared rules, then how are they created? ‘Instead of universals, you get standard engineering solutions that languages adopt again and again, and then you get outliers,’ says Evans. He and Levinson argue that this is because any given language is a complex system shaped by many factors, including culture, genetics and history. There are no absolutely universal traits of language, they say, only tendencies. And it is a mix of strong and weak tendencies that characterises the ‘bio-cultural’ mix that we call language.

According to the two linguists, the strong tendencies explain why many languages display common patterns. A variety of factors tend to push language in a similar direction, such as the structure of the brain, the biology of speech, and the efficiencies of communication. Widely shared linguistic elements may also be ones that build on a particularly human kind of reasoning. For example, the fact that before we learn to speak we perceive the world as a place full of things causing actions (agents) and things having actions done to them (patients) explains why most languages deploy these grammatical categories.

Weak tendencies, in contrast, are explained by the idiosyncrasies of different languages. Evans and Levinson argue that many aspects of the particular natural history of a population may affect its language. For instance, Andy Butcher at Flinders University in Adelaide, South Australia, has observed that indigenous Australian children have by far the highest incidence of chronic middle-ear infection of any population on the planet, and that most indigenous Australian languages lack many sounds that are common in other languages, but which are hard to hear with a middle-ear infection. Whether this condition has shaped the sound systems of these languages is unknown, says Evans, but it is important to consider the idea.

Levinson and Evans are not the first to question the theory of universal grammar, but no one has summarised these ideas quite as persuasively, and given them as much reach. As a result, their arguments have generated widespread enthusiasm, particularly among those linguists who are tired of trying to squeeze their findings into the straitjacket of ‘absolute universals’. To some, it is the final nail in UG’s coffin. Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, has been a long-standing critic of the idea that all languages conform to a set of rules. ‘Universal grammar is dead,’ he says. 

[1] a unit of sound

 

 

Overcoming the language barrier

The discovery that language can be a barrier to communication is quickly made by all who travel, study, govern or sell. Whether the activity is tourism, research, government, policing, business, or data dissemination, the lack of a common language can severely impede progress or can halt it altogether. 'Common language' here usually means a foreign language, but the same point applies in principle to any encounter with unfamiliar dialects or styles within a single language. 'They don't talk the same language' has a major metaphorical meaning alongside its literal one.

Although communication problems of this kind must happen thousands of times each day, very few become public knowledge. Publicity comes only when a failure to communicate has major consequences, such as strikes, lost orders, legal problems, or fatal accidents - even, at times, war. One reported instance of communication failure took place in 1970, when several Americans ate a species of poisonous mushroom. No remedy was known, and two of the people died within days. A radio report of the case was heard by a chemist who knew of a treatment that had been successfully used in 1959 and published in 1963. Why had the American doctors not heard of it seven years later? Presumably because the report of the treatment had been published only in journals written in European languages other than English.

Several comparable cases have been reported. But isolated examples do not give an impression of the size of the problem — something that can come only from studies of the use or avoidance of foreign-language materials and contacts in different communicative situations. In the English-speaking scientific world, for example, surveys of books and documents consulted in libraries and other information agencies have shown that very little foreign-language material is ever consulted. Library requests in the field of science and technology showed that only 13 per cent were for foreign-language periodicals. Studies of the sources cited in publications lead to a similar conclusion: the use of foreign-language sources is often found to be as low as 10 per cent.

The language barrier presents itself in stark form to firms who wish to market their products in other countries. British industry, in particular, has in recent decades often been criticised for its linguistic insularity — for its assumption that foreign buyers will be happy to communicate in English, and that awareness of other languages is not therefore a priority. In the 1960s, over two-thirds of British firms dealing with non-English-speaking customers were using English for outgoing correspondence; many had their sales literature only in English; and as many as 40 per cent employed no-one able to communicate in the customers' languages. A similar problem was identified in other English-speaking countries, notably the USA, Australia and New Zealand. And non-English-speaking countries were by no means exempt - although the widespread use of English as an alternative language made them less open to the charge of insularity.

The criticism and publicity given to this problem since the 1960s seem to have greatly improved the situation. Industrial training schemes have promoted an increase in linguistic and cultural awareness. Many firms now have their own translation services; to take just one example in Britain, Rowntree Mackintosh now publish their documents in six languages (English, French, German, Dutch, Italian and Xhosa). Some firms run part-time language courses in the languages of the countries with which they are most involved; some produce their own technical glossaries, to ensure consistency when material is being translated. It is now much more readily appreciated that marketing efforts can be delayed, damaged, or disrupted by a failure to take account of the linguistic needs of the customer.

The changes in awareness have been most marked in English-speaking countries, where the realisation has gradually dawned that by no means everyone in the world knows English well enough to negotiate in it. This is especially a problem when English is not an official language of public administration, as in most parts of the Far East, Russia, Eastern Europe, the Arab world, Latin America and French-speaking Africa. Even in cases where foreign customers can speak English quite well, it is often forgotten that they may not be able to understand it to the required level - bearing in mind the regional and social variation which permeates speech and which can cause major problems of listening comprehension. In securing understanding, how 'we' speak to 'them' is just as important, it appears, as how 'they' speak to 'us'.

 

 

SAVING LANGUAGE

For the first time, linguists have put a price on language. To save a language from extinction isn’t cheap - but more and more people are arguing that the alternative is the death of communities

There is nothing unusual about a single language dying. Communities have come and gone throughout history, and with them their language. But what is happening today is extraordinary, judged by the standards of the past. It is language extinction on a massive scale. According to the best estimates, there are some 6,000 languages in the world. Of these, about half are going to die out in the course of the next century: that’s 3,000 languages in 1,200 months. On average, there is a language dying out somewhere in the world every two weeks or so.
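
A rough check of the arithmetic in the passage, using only the figures given above: 1,200 months ÷ 3,000 languages = 0.4 months, or roughly 12 days, per language, which is indeed close to one language dying every two weeks.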

How do we know? In the course of the past two or three decades, linguists all over the world have been gathering comparative data. If they find a language with just a few speakers left, and nobody is bothering to pass the language on to the children, they conclude that language is bound to die out soon. And we have to draw the same conclusion if a language has fewer than 100 speakers. It is not likely to last very long. A 1999 survey shows that 97 per cent of the world’s languages are spoken by just four per cent of the people.

It is too late to do anything to help many languages, where the speakers are too few or too old, and where the community is too busy just trying to survive to care about their language. But many languages are not in such a serious position. Often, where languages are seriously endangered, there are things that can be done to give new life to them. This is called revitalisation.

Once a community realises that its language is in danger, it can start to introduce measures which can genuinely revitalise. The community itself must want to save its language. The culture of which it is a part needs to have a respect for minority languages. There needs to be funding, to support courses, materials, and teachers. And there need to be linguists, to get on with the basic task of putting the language down on paper. That’s the bottom line: getting the language documented - recorded, analysed, written down. People must be able to read and write if they and their language are to have a future in an increasingly computer-literate civilisation.

But can we save a few thousand languages, just like that? Yes, if the will and funding were available. It is not cheap, getting linguists into the field, training local analysts, supporting the community with language resources and teachers, compiling grammars and dictionaries, writing materials for use in schools. It takes time, lots of it, to revitalise an endangered language. Conditions vary so much that it is difficult to generalise, but a figure of $100,000 a year per language cannot be far from the truth. If we devoted that amount of effort over three years for each of 3,000 languages, we would be talking about some $900 million.
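
To make the arithmetic behind that estimate explicit, using only the figures given in this paragraph: $100,000 per language per year × 3 years × 3,000 languages = $900,000,000, that is, roughly $900 million in total.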

There are some famous cases which illustrate what can be done. Welsh, alone among the Celtic languages, is not only stopping its steady decline towards extinction but showing signs of real growth. Two Language Acts protect the status of Welsh now, and its presence is increasingly in evidence wherever you travel in Wales.

On the other side of the world, Maori in New Zealand has been maintained by a system of so-called ‘language nests’, first introduced in 1982. These are organisations which provide children under five with a domestic setting in which they are intensively exposed to the language. The staff are all Maori speakers from the local community. The hope is that the children will keep their Maori skills alive after leaving the nests, and that as they grow older they will in turn become role models to a new generation of young children. There are cases like this all over the world. And when the reviving language is associated with a degree of political autonomy, the growth can be especially striking, as shown by Faroese, spoken in the Faroe Islands, after the islanders received a measure of autonomy from Denmark.

In Switzerland, Romansch was facing a difficult situation, spoken in five very different dialects, with small and diminishing numbers, as young people left their community for work in the German-speaking cities. The solution here was the creation in the 1980s of a unified written language for all these dialects. Romansch Grischun, as it is now called, has official status in parts of Switzerland, and is being increasingly used in spoken form on radio and television.

A language can be brought back from the very brink of extinction. The Ainu language of Japan, after many years of neglect and repression, had reached a stage where there were only eight fluent speakers left, all elderly. However, new government policies brought fresh attitudes and a positive interest in survival. Several ‘semi-speakers’ - people who had become unwilling to speak Ainu because of the negative attitudes of Japanese speakers - were prompted to become active speakers again. There is fresh interest now and the language is more publicly available than it has been for years.

If good descriptions and materials are available, even extinct languages can be resurrected. Kaurna, from South Australia, is an example. This language had been extinct for about a century, but had been quite well documented. So, when a strong movement grew for its revival, it was possible to reconstruct it. The revived language is not the same as the original, of course. It lacks the range that the original had, and much of the old vocabulary. But it can nonetheless act as a badge of present-day identity for its people. And as long as people continue to value it as a true marker of their identity, and are prepared to keep using it, it will develop new functions and new vocabulary, as any other living language would do.

It is too soon to predict the future of these revived languages, but in some parts of the world they are attracting precisely the range of positive attitudes and grass-roots support which are the preconditions for language survival. In such unexpected but heart-warming ways might we see the grand total of languages in the world minimally increased.

 

 

Bilingualism in Children

One misguided legacy of over a hundred years of writing on bilingualism1 is that children’s intelligence will suffer if they are bilingual. Some of the earliest research into bilingualism examined whether bilingual children were ahead or behind monolingual2 children on IQ tests. From the 1920s through to the 1960s, the tendency was to find monolingual children ahead of bilinguals on IQ tests. The conclusion was that bilingual children were mentally confused. Having two languages in the brain, it was said, disrupted effective thinking. It was argued that having one well-developed language was superior to having two half-developed languages.

The idea that bilinguals may have a lower IQ still exists among many people, particularly monolinguals. However, we now know that this early research was misconceived and incorrect. First, such research often gave bilinguals an IQ test in their weaker language – usually English. Had bilinguals been tested in Welsh or Spanish or Hebrew, a different result may have been found. The testing of bilinguals was thus unfair. Second, like was not compared with like. Bilinguals tended to come from, for example, impoverished New York or rural Welsh backgrounds. The monolinguals tended to come from more middle class, urban families. Working class bilinguals were often compared with middle class monolinguals. So the results were more likely to be due to social class differences than language differences. The comparison of monolinguals and bilinguals was unfair.

The most recent research from Canada, the United States and Wales suggests that bilinguals are, at least, equal to monolinguals on IQ tests. When bilinguals have two well-developed languages (in the research literature called balanced bilinguals), bilinguals tend to show a slight superiority in IQ tests compared with monolinguals. This is the received psychological wisdom of the moment and is good news for raising bilingual children. Take, for example, a child who can operate in either language in the curriculum in the school. That child is likely to be ahead on IQ tests compared with similar (same gender, social class and age) monolinguals. Far from making people mentally confused, bilingualism is now associated with a mild degree of intellectual superiority.

One note of caution needs to be sounded. IQ tests probably do not measure intelligence. IQ tests measure only a small sample of intelligence in its broadest sense. IQ tests are simply paper and pencil tests where only 'right and wrong' answers are allowed. Is all intelligence summed up in such right and wrong, pencil and paper tests? Isn’t there a wider variety of intelligences that are important in everyday functioning and everyday life?

Many questions need answering. Do we only define an intelligent person as somebody who obtains a high score on an IQ test? Are the only intelligent people those who belong to high IQ organisations such as MENSA? Is there social intelligence, musical intelligence, military intelligence, marketing intelligence, motoring intelligence, political intelligence? Are all, or indeed any, of these forms of intelligence measured by a simple pencil and paper IQ test which demands a single, acceptable, correct solution to each question? Defining what constitutes intelligent behaviour requires a personal value judgement as to what type of behaviour, and what kind of person is of more worth.

The current state of psychological wisdom about bilingual children is that, where two languages are relatively well developed, bilinguals have thinking advantages over monolinguals. Take an example. A child is asked a simple question: How many uses can you think of for a brick? Some children give two or three answers only. They can think of building walls, building a house and perhaps that is all. Another child scribbles away, pouring out ideas one after the other: blocking up a rabbit hole, breaking a window, using as a bird bath, as a plumb line, as an abstract sculpture in an art exhibition.

Research across different continents of the world shows that bilinguals tend to be more fluent, flexible, original and elaborate in their answers to this type of open-ended question. The person who can think of a few answers tends to be termed a convergent thinker. They converge onto a few acceptable conventional answers. People who think of lots of different uses for unusual items (e.g. a brick, tin can, cardboard box) are called divergers. Divergers like a variety of answers to a question and are imaginative and fluent in their thinking.

There are other dimensions in thinking where approximately 'balanced' bilinguals may have temporary and occasionally permanent advantages over monolinguals: increased sensitivity to communication, a slightly speedier movement through the stages of cognitive development, and being less fixed on the sounds of words and more centred on the meaning of words. Such ability to move away from the sound of words and fix on the meaning of words tends to be a (temporary) advantage for bilinguals around the ages of four to six. This advantage may mean an initial head start in learning to read and learning to think about language.

1 bilingualism: the ability to speak two languages

2 monolingual: using or speaking only one language

What is speed reading, and why do we need it?

Speed reading is not just about reading fast. It is also about how much information you can remember when you have finished reading. The World Championship Speed-Reading Competition says that its top competitors average between 1,000 and 2,000 words a minute. But they must remember at least 50 percent of this in order to qualify for the competition.

Nowadays, speed reading has become an essential skill in any environment where people have to master a large volume of information. Professional workers need reading skills to help them get through many documents every day, while students under pressure to deal with assignments may feel they have to read more and read faster all the time.

Although there are various methods to increase reading speed, the trick is deciding what information you want first. For example, if you only want a rough outline of an issue, then you can skim the material quickly and extract the key facts. However, if you need to understand every detail in a document, then you must read it slowly enough to understand this.

Even when you know how to ignore irrelevant detail, there are other improvements you can make to your reading style which will increase your speed. For example, most people can read much faster if they read silently. Reading each word aloud takes time for the information to make a complete circuit in your brain before being pronounced. Some researchers believe that as long as the first and last letters are in place, the brain can still understand the arrangement of the other letters in the word because it logically puts each piece into place.

Chunking is another important method. Most people learn to read either letter by letter or word by word. As you improve, this changes. You will probably find that you are fixing your eyes on a block of words, then moving your eyes to the next block of words, and so on. You are reading blocks of words at a time, not individual words one by one. You may also notice that you do not always go from one block to the next: sometimes you may move back to a previous block if you are unsure about something.

A skilled reader will read a lot of words in each block. He or she will only look at each block for an instant and will then move on. Only rarely will the reader’s eyes skip back to a previous block of words. This reduces the amount of work that the reader’s eyes have to do. It also increases the volume of information that can be taken in over a given period of time.

On the other hand, a slow reader will spend a lot of time reading small blocks of words. He or she will skip back often, losing the flow and structure of the text, and muddling their overall understanding of the subject. This irregular eye movement quickly makes the reader tired. Poor readers tend to dislike reading because they feel it is difficult to concentrate and comprehend written information.

The best tip anyone can have to improve their reading speed is to practise. In order to do this effectively, a person must be engaged in the material and want to know more. If you find yourself constantly having to re-read the same paragraph, you may want to switch to reading material that grabs your attention. If you enjoy what you are reading, you will make quicker progress.

Quiet roads ahead

The roar of passing vehicles could soon be a thing of the past

The noise produced by busy roads is a growing problem. While vehicle designers have worked hard to quieten engines, they have been less successful elsewhere. The sound created by the tyres on the surface of the road now accounts for more than half the noise that vehicles create, and as road building and car sales continue to boom - particularly in Asia and the US - this is turning into a global issue.

According to the World Health Organization, exposure to noise from road traffic over long periods can lead to stress-related health problems. And where traffic noise exceeds a certain threshold, road builders have to spend money erecting sound barriers and installing double glazing in blighted homes. Houses become harder to sell where environmental noise is high, and people are not as efficient or productive at work.

Already, researchers in the Netherlands - one of the most densely populated countries in the world - are working to develop techniques for silencing the roads. In the next five years the Dutch government aims to have reduced noise levels from the country's road surfaces by six decibels overall. Dutch mechanical engineer Ard Kuijpers has come up with one of the most promising, and radical, ideas. He set out to tackle the three most important factors: surface texture, hardness and ability to absorb sound.

The rougher the surface, the more likely it is that a tyre will vibrate and create noise. Road builders usually eliminate bumps on freshly laid asphalt with heavy rollers, but Kuijpers has developed a method of road building that he thinks can create the ultimate quiet road. His secret is a special mould 3 metres wide and 50 metres long. Hot asphalt, mixed with small stones, is spread into the mould by a rail-mounted machine which flattens the asphalt mix with a roller. When it sets, the 10-millimetre-thick sheet has a surface smoother than anything that can be achieved by conventional methods.

To optimise the performance of his road surface - to make it hard wearing yet soft enough to snuff out vibrations - he then adds another layer below the asphalt. This consists of a 30-millimetre-thick layer of rubber, mixed with stones which are larger than those in the layer above. 'It's like a giant mouse mat, making the road softer,' says Kuijpers.

The size of the stones used in the two layers is important, since they create pores of a specific size in the road surface. Those used in the top layer are just 4 or 5 millimetres across, while the ones below are approximately twice that size - about 9 millimetres. Kuijpers says the surface can absorb any air that is passing through a tyre's tread (the indentations or ridges on the surface of a tyre), damping oscillations that would otherwise create noise. In addition, they make it easier for the water to drain away, which can make the road safer in wet weather.

Compared with the complex manufacturing process, laying the surface is quite simple. It emerges from the factory rolled, like a carpet, onto a drum 1.5 metres in diameter. On site, it is unrolled and stuck onto its foundation with bitumen. Even the white lines are applied in the factory.

The foundation itself uses an even more sophisticated technique to reduce noise further. It consists of a sound-absorbing concrete base containing flask-shaped slots up to 10 millimetres wide and 30 millimetres deep that are open at the top and sealed at the lower end. These cavities act like Helmholtz resonators - when sound waves of specific frequencies enter the top of a flask, they set up resonances inside and the energy of the sound dissipates into the concrete as heat. The cavities play another important role: they help to drain water that seeps through from the upper surface. This flow will help flush out waste material and keep the pores in the outer layers clear.

Kuijpers can even control the sounds that his resonators absorb, simply by altering their dimensions. This could prove especially useful since different vehicles produce noise at different frequencies. Car tyres peak at around 1000 hertz, for example, but trucks generate lower-frequency noise at around 600 hertz. By varying the size of the Kuijpers resonators, it is possible to control which frequencies the concrete absorbs. On large highways, trucks tend to use the inside lane, so resonators here could be tuned to absorb sounds at around 600 hertz while those in other lanes could deal with higher frequency noise from cars.
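
For readers curious about the physics behind this tuning, the resonant frequency of a Helmholtz resonator is usually approximated (a standard textbook result, not a formula given in the article) by f ≈ (c / 2π) × √(A / (V × L)), where c is the speed of sound, A is the cross-sectional area of the slot opening, V is the cavity volume and L is the effective length of the neck. Enlarging the cavity or narrowing the opening lowers f, which is broadly how resonators under a truck lane could be tuned down towards 600 hertz while those in other lanes absorb the higher frequencies produced by cars.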

Kuijpers believes he can cut noise by five decibels compared to the quietest of today's roads. He has already tested a 100-metre-long section of his road on a motorway near Apeldoorn, and Dutch construction company Heijmans is discussing the location of the next roll-out road with the country's government. The success of Kuijpers' design will depend on how much it eventually costs. But for those affected by traffic noise there is hope of quieter times ahead.

 

 

Advantages of public transport


A new study conducted for the World Bank by Murdoch University's Institute for Science and Technology Policy (ISTP) has demonstrated that public transport is more efficient than cars. The study compared the proportion of wealth poured into transport by thirty-seven cities around the world. This included both the public and private costs of building, maintaining and using a transport system.

The study found that the Western Australian city of Perth is a good example of a city with minimal public transport. As a result, 17% of its wealth went into transport costs. Some European and Asian cities, on the other hand, spent as little as 5%. Professor Peter Newman, ISTP Director, pointed out that these more efficient cities were able to put the difference into attracting industry and jobs or creating a better place to live.

According to Professor Newman, the larger Australian city of Melbourne is a rather unusual city in this sort of comparison. He describes it as two cities: 'A European city surrounded by a car-dependent one'. Melbourne's large tram network has made car use in the inner city much lower, but the outer suburbs have the same car-based structure as most other Australian cities. The explosion in demand for accommodation in the inner suburbs of Melbourne suggests a recent change in many people's preferences as to where they live.

Newman says this is a new, broader way of considering public transport issues. In the past, the case for public transport has been made on the basis of environmental and social justice considerations rather than economics. Newman, however, believes the study demonstrates that 'the auto-dependent city model is inefficient and grossly inadequate in economic as well as environmental terms'.

Bicycle use was not included in the study but Newman noted that the two most 'bicycle friendly' cities considered - Amsterdam and Copenhagen - were very efficient, even though their public transport systems were 'reasonable but not special'.

It is common for supporters of road networks to reject the models of cities with good public transport by arguing that such systems would not work in their particular city. One objection is climate. Some people say their city could not make more use of public transport because it is either too hot or too cold. Newman rejects this, pointing out that public transport has been successful in both Toronto and Singapore and, in fact, he has checked the use of cars against climate and found 'zero correlation'. 

When it comes to other physical features, road lobbies are on stronger ground. For example, Newman accepts it would be hard for a city as hilly as Auckland to develop a really good rail network. However, he points out that both Hong Kong and Zurich have managed to make a success of their rail systems, heavy and light respectively, though there are few cities in the world as hilly.    

In fact, Newman believes the main reason for adopting one sort of transport over another is politics: 'The more democratic the process, the more public transport is favored.' He considers Portland, Oregon, a perfect example of this. Some years ago, federal money was granted to build a new road. However, local pressure groups forced a referendum over whether to spend the money on light rail instead. The rail proposal won and the railway worked spectacularly well. In the years that have followed, more and more rail systems have been put in, dramatically changing the nature of the city. Newman notes that Portland has about the same population as Perth and had a similar population density at the time.

In the UK, travel times to work had been stable for at least six centuries, with people avoiding situations that required them to spend more than half an hour travelling to work. Trains and cars initially allowed people to live at greater distances without taking longer to reach their destination. However, public infrastructure did not keep pace with urban sprawl, causing massive congestion problems which now make commuting times far higher.

There is a widespread belief that increasing wealth encourages people to live farther out where cars are the only viable transport. The example of European cities refutes that. They are often wealthier than their American counterparts but have not generated the same level of car use. In Stockholm, car use has actually fallen in recent years as the city has become larger and wealthier. A new study makes this point even more starkly. Developing cities in Asia, such as Jakarta and Bangkok, make more use of the car than wealthy Asian cities such as Tokyo and Singapore. In cities that developed later, the World Bank and Asian Development Bank discouraged the building of public transport and people have been forced to rely on cars - creating the massive traffic jams that characterize those cities.

Newman believes one of the best studies on how cities built for cars might be converted to rail use is The Urban Village report, which used Melbourne as an example. It found that pushing everyone into the city centre was not the best approach. Instead, the proposal advocated the creation of urban villages at hundreds of sites, mostly around railway stations. 

It was once assumed that improvements in telecommunications would lead to more dispersal in the population as people were no longer forced into cities. However, the ISTP team's research demonstrates that the population and job density of cities rose or remained constant in the 1980s after decades of decline. The explanation for this seems to be that it is valuable to place people working in related fields together. 'The new world will largely depend on human creativity, and creativity flourishes where people come together face-to-face.'

 

 

Makete Integrated Rural Transport Project

Section A

The disappointing results of many conventional road transport projects in Africa led some experts to rethink the strategy by which rural transport problems were to be tackled at the beginning of the 1980s. A request for help in improving the availability of transport within the remote Makete District of southwestern Tanzania presented the opportunity to try a new approach.

The concept of 'integrated rural transport' was adopted in the task of examining the transport needs of the rural households in the district. The objective was to reduce the time and effort needed to obtain access to essential goods and services through an improved rural transport system. The underlying assumption was that the time saved would be used instead for activities that would improve the social and economic development of the communities. The Makete Integrated Rural Transport Project (MIRTP) started in 1985 with financial support from the Swiss Development Corporation and was co-ordinated with the help of the Tanzanian government.

Section B

When the project began, Makete District was virtually totally isolated during the rainy season. The regional road was in such bad shape that access to the main towns was impossible for about three months of the year. Road traffic was extremely rare within the district, and alternative means of transport were restricted to donkeys in the north of the district. People relied primarily on the paths, which were slippery and dangerous during the rains.

Before solutions could be proposed, the problems had to be understood. Little was known about the transport demands of the rural households, so Phase I, between December 1985 and December 1987, focused on research. The socio-economic survey of more than 400 households in the district indicated that a household in Makete spent, on average, seven hours a day on transporting themselves and their goods, a figure which seemed extreme but which has also been obtained in surveys in other rural areas in Africa. Interesting facts regarding transport were found: 95% was on foot; 80% was within the locality; and 70% was related to the collection of water and firewood and travelling to grinding mills.

Section C

Having determined the main transport needs, possible solutions were identified which might reduce the time and burden. During Phase II, from January to February 1991, a number of approaches were implemented in an effort to improve mobility and access to transport.

An improvement of the road network was considered necessary to ensure the import and export of goods to the district. These improvements were carried out using methods that were heavily dependent on labour. In addition to the improvement of roads, these methods provided training in the operation of a mechanical workshop and bus and truck services. However, the difference from the conventional approach was that this time consideration was given to local transport needs outside the road network.

Most goods were transported along the paths that provide short-cuts up and down the hillsides, but the paths were a real safety risk and made the journey on foot even more arduous. It made sense to improve the paths by building steps, handrails and footbridges.

It was uncommon to find means of transport that were more efficient than walking but less technologically advanced than motor vehicles. The use of bicycles was constrained by their high cost and the lack of available spare parts. Oxen were not used at all but donkeys were used by a few households in the northern part of the district. MIRTP focused on what would be most appropriate for the inhabitants of Makete in terms of what was available, how much they could afford and what they were willing to accept.

After careful consideration, the project chose the promotion of donkeys - a donkey costs less than a bicycle - and the introduction of a locally manufacturable wheelbarrow.

Section D

At the end of Phase II, it was clear that the selected approaches to Makete’s transport problems had had different degrees of success. Phase III, from March 1991 to March 1993, focused on the refinement and institutionalisation of these activities.

The road improvements and accompanying maintenance system had helped make the district centre accessible throughout the year. Essential goods from outside the district had become more readily available at the market, and prices did not fluctuate as much as they had done before.

Paths and secondary roads were improved only at the request of communities who were willing to participate in construction and maintenance. However, the improved paths impressed the inhabitants, and requests for assistance greatly increased soon after only a few improvements had been completed.

The efforts to improve the efficiency of the existing transport services were not very successful because most of the motorised vehicles in the district broke down and there were no resources to repair them. Even the introduction of low-cost means of transport was difficult because of the general poverty of the district. The locally manufactured wheelbarrows were still too expensive for all but a few of the households. Modifications to the original design by local carpenters cut production time and costs. Other local carpenters have been trained in the new design so that they can respond to requests. Nevertheless, a locally produced wooden wheelbarrow which costs around 5000 Tanzanian shillings (less than US$20) in Makete, and is about one quarter the cost of a metal wheelbarrow, is still too expensive for most people.

Donkeys, which were imported to the district, have become more common and contribute, in particular, to the transportation of crops and goods to market. Those who have bought donkeys are mainly from richer households but, with an increased supply through local breeding, donkeys should become more affordable. Meanwhile, local initiatives are promoting the renting out of the existing donkeys.

It should be noted, however, that a donkey, which at 20,000 Tanzanian shillings costs less than a bicycle, is still an investment equal to an average household's income over half a year. This clearly illustrates the need for supplementary measures if one wants to assist the rural poor.
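
A rough implication of those figures, not stated in the passage itself: if 20,000 shillings represents about half a year's income, then the average household earns around 40,000 shillings a year, which at the exchange rate implied earlier (5,000 shillings being a little under US$20) is somewhere under US$160.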

Section E

It would have been easy to criticise the MIRTP for using in the early phases a 'top-down' approach, in which decisions were made by experts and officials before being handed down to communities, but it was necessary to start the process from the level of the governmental authorities of the district. It would have been difficult to respond to the requests of villagers and other rural inhabitants without the support and understanding of district authorities.

Section F

Today, nobody in the district argues about the importance of improved paths and inexpensive means of transport. But this is the result of dedicated work over a long period, particularly from the officers in charge of community development. They played an essential role in raising awareness and interest among the rural communities.

The concept of integrated rural transport is now well established in Tanzania, where a major program of rural transport is just about to start. The experiences from Makete will help in this initiative, and Makete District will act as a reference for future work.

ABSENTEEISM IN NURSING: A LONGITUDINAL STUDY

Absence from work is a costly and disruptive problem for any organisation.

The cost of absenteeism in Australia has been put at 1.8 million hours per day or $1400 million annually. The study reported here was conducted in the Prince William Hospital in Brisbane, Australia, where, prior to this time, few active steps had been taken to measure, understand or manage the occurrence of absenteeism.

Nursing Absenteeism 

A prevalent attitude amongst many nurses in the group selected for study was that there was no reward or recognition for not utilising the paid sick leave entitlement allowed them in their employment conditions. Therefore, they believed they might as well take the days off, sick or otherwise. Similar attitudes have been noted by James (1989), who observed that sick leave is seen by many workers as a right, like annual holiday leave.

Miller and Norton (1986), in their survey of 865 nursing personnel, found that 73 per cent felt they should be rewarded for not taking sick leave, because some employees always used their sick leave. Further, 67 per cent of nurses felt that administration was not sympathetic to the problems shift work causes to employees' personal and social lives. Only 53 per cent of the respondents felt that every effort was made to schedule staff fairly.

In another longitudinal study of nurses working in two Canadian hospitals, Hackett, Bycio and Guion (1989) examined the reasons why nurses took absence from work. The most frequent reason stated for absence was minor illness to self. Other causes, in decreasing order of frequency, were illness in family, family social function, work to do at home and bereavement.

Method

In an attempt to reduce the level of absenteeism amongst the 250 Registered and Enrolled Nurses in the present study, the Prince William management introduced three different, yet potentially complementary, strategies over 18 months.

Strategy 1: Non-financial (material) incentives

Within the established wage and salary system it was not possible to use hospital funds to support this strategy. However, it was possible to secure incentives from local businesses, including free passes to entertainment parks, theatres, restaurants, etc. At the end of each roster period, the ward with the lowest absence rate would win the prize.

Strategy 2: Flexible, fair rostering

Where possible, staff were given the opportunity to determine their working schedule within the limits of clinical needs.

Strategy 3: Individual absenteeism and counselling

Each month, managers would analyse the pattern of absence of staff with excessive sick leave (greater than ten days per year for full-time employees). Characteristic patterns of potential 'voluntary absenteeism' such as absence before and after days off, excessive weekend and night duty absence and multiple single days off were communicated to all ward nurses and then, as necessary, followed up by action.

Results

Absence rates for the six months prior to the incentive scheme ranged from 3.69 per cent to 4.32 per cent. In the following six months they ranged between 2.87 per cent and 3.96 per cent. This represents a 20 per cent improvement. However, analysing the absence rates on a year-to-year basis, the overall absence rate was 3.60 per cent in the first year and 3.43 per cent in the following year. This represents a 5 per cent decrease from the first to the second year of the study. A significant decrease in absence over the two-year period could not be demonstrated.
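
The year-on-year figure can be checked directly from the rates quoted: (3.60 − 3.43) ÷ 3.60 ≈ 0.047, which rounds to the 5 per cent decrease reported.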

Discussion

The non-financial incentive scheme did appear to assist in controlling absenteeism in the short term. As the scheme progressed it became harder to secure prizes and this contributed to the program's losing momentum and finally ceasing. There were mixed results across wards as well. For example, in wards with staff members who had long-term genuine illness, there was little chance of winning, and to some extent the staff on those wards were disempowered. Our experience would suggest that the long-term effects of incentive awards on absenteeism are questionable.

Over the time of the study, staff were given a larger degree of control in their rosters. This led to significant improvements in communication between managers and staff. A similar effect was found from the implementation of the third strategy. Many of the nurses had not realised the impact their behaviour was having on the organisation and their colleagues but there were also staff members who felt that talking to them about their absenteeism was 'picking' on them and this usually had a negative effect on management—employee relationships.

Conclusion

Although there has been some decrease in absence rates, no single strategy or combination of strategies has had a significant impact on absenteeism per se. Notwithstanding the disappointing results, it is our contention that the strategies were not in vain. A shared ownership of absenteeism and a collaborative approach to problem solving have facilitated improved cooperation and communication between management and staff. It is our belief that this improvement alone, while not tangibly measurable, has increased the ability of management to manage the effects of absenteeism more effectively since this study.


This article has been adapted and condensed from the article by G. William and K. Slater (1996), 'Absenteeism in nursing: A longitudinal study', Asia Pacific Journal of Human Resources, 34(1): 111-21. Names and other details have been changed and report findings may have been given a different emphasis from the original. We are grateful to the authors and Asia Pacific Journal of Human Resources for allowing us to use the material in this way.

 

 

 


Recovering a damaged reputation

In 2009, it was revealed that some of the information published by the University of East Anglia’s Climatic Research Unit (CRU) in the UK, concerning climate change, had been inaccurate. Furthermore, it was alleged that some of the relevant statistics had been withheld from publication. The ensuing controversy affected the reputation not only of that institution, but also of the Intergovernmental Panel on Climate Change (IPCC), with which the CRU is closely involved, and of climate scientists in general. Even if the claims of misconduct and incompetence were eventually proven to be largely untrue, or confined to a few individuals, the damage was done. The perceived wrongdoings of a few people had raised doubts about the many.

The response of most climate scientists was to cross their fingers and hope for the best, and they kept a low profile. Many no doubt hoped that subsequent independent inquiries into the IPCC and CRU would draw a line under their problems. However, although these were likely to help, they were unlikely to undo the harm caused by months of hostile news reports and attacks by critics.

The damage that has been done should not be underestimated. As Ralph Cicerone, the President of the US National Academy of Sciences, wrote in an editorial in the journal Science: ‘Public opinion has moved toward the view that scientists often try to suppress alternative hypotheses and ideas and that scientists will withhold data and try to manipulate some aspects of peer review to prevent dissent.’ He concluded that ‘the perceived misbehavior of even a few scientists can diminish the credibility of science as a whole.’

An opinion poll taken at the beginning of 2010 found that the proportion of people in the US who trust scientists as a source of information about global warming had dropped from 83 percent, in 2008, to 74 percent. Another survey carried out by the British Broadcasting Corporation in February 2010 found that just 26 percent of British people now believe that climate change is confirmed as being largely human-made, down from 41 percent in November 2009.

Regaining the confidence and trust of the public is never easy. Hunkering down and hoping for the best - climate science’s current strategy - makes it almost impossible. It is much better to learn from the successes and failures of organisations that have dealt with similar blows to their public standing.

In fact, climate science needs professional help to rebuild its reputation. It could do worse than follow the advice given by Leslie Gaines-Ross, a ‘reputation strategist’ at Public Relations (PR) company Weber Shandwick, in her recent book Corporate Reputation: 12 Steps to Safeguarding and Recovering Reputation. Gaines-Ross’s strategy is based on her analysis of how various organisations responded to crises, such as desktop-printer firm Xerox, whose business plummeted during the 1990s, and the USA’s National Aeronautics and Space Administration (NASA) after the Columbia shuttle disaster in 2003.

The first step she suggests is to ‘take the heat - leader first’. In many cases, chief executives who publicly accept responsibility for corporate failings can begin to reverse the freefall of their company’s reputations, but not always. If the leader is held at least partly responsible for the fall from grace, it can be almost impossible to convince critics that a new direction can be charted with that same person at the helm.

This is the dilemma facing the heads of the IPCC and CRU. Both have been blamed for their organisations’ problems, not least for the way in which they have dealt with critics, and both have been subjected to public calls for their removal. Yet both organisations appear to believe they can repair their reputations without a change of leadership.

The second step outlined by Gaines-Ross is to ‘communicate tirelessly’. Yet many climate researchers have avoided the media and the public, at least until the official enquiries have concluded their reports. This reaction may be understandable, but it has backfired. Journalists following the story have often been unable to find spokespeople willing to defend climate science. In this case, ‘no comment’ is commonly interpreted as an admission of silent, collective guilt.

Remaining visible is only a start, though; climate scientists also need to be careful what they say. They must realise that they face doubts not just about their published results, but also about their conduct and honesty. It simply won’t work for scientists to continue to appeal to the weight of the evidence, while refusing to discuss the integrity of their profession. The harm has been increased by a perceived reluctance to admit even the possibility of mistakes or wrongdoing.

The third step put forward by Gaines-Ross is ‘don’t underestimate your critics and competitors’. This means not only recognising the skill with which the opponents of climate research have executed their campaigns through Internet blogs and other media, but also acknowledging the validity of some of their criticisms. It is clear, for instance, that climate scientists need better standards of transparency, to allow for scrutiny not just by their peers, but also by critics from outside the world of research.

It is also important to engage with those critics. That doesn’t mean conceding to unfounded arguments which are based on prejudice rather than evidence, but there is an obligation to help the public understand the causes of climate change, as well as the options for avoiding and dealing with the consequences.

To begin the process of rebuilding trust in their profession, climate scientists need to follow these three steps. But that is just the start. Gaines-Ross estimates that it typically takes four years for a company to rescue and restore a broken reputation.

Winning back public confidence is a marathon, not a sprint, but you can’t win at all if you don’t step up to the starting line.

 

 

IMPLEMENTING THE CYCLE OF SUCCESS: A CASE STUDY

Within Australia, Australian Hotels Inc (AHI) operates nine hotels and employs over 2000 permanent full-time staff, 300 permanent part-time employees and 100 casual staff. One of its latest ventures, the Sydney Airport hotel (SAH), opened in March 1995. The hotel is the closest to Sydney Airport and is designed to provide the best available accommodation, food and beverage and meeting facilities in Sydney's southern suburbs. Similar to many international hotel chains, however, AHI has experienced difficulties in Australia in providing long-term profits for hotel owners, as a result of the country's high labour-cost structure. In order to develop an economically viable hotel organisation model, AHI decided to implement some new policies and practices at SAH.

The first of the initiatives was an organisational structure with only three levels of management - compared to the traditional seven. Partly as a result of this change, there are 25 per cent fewer management positions, enabling a significant saving. This change also has other implications. Communication, both up and down the organisation, has greatly improved. Decision-making has been forced down in many cases to front-line employees. As a result, guest requests are usually met without reference to a supervisor, improving both customer and employee satisfaction.

The hotel also recognised that it would need a different approach to selecting employees who would fit in with its new policies. In its advertisements, the hotel stated a preference for people with some 'service' experience in order to minimise traditional work practices being introduced into the hotel. Over 7000 applicants filled in application forms for the 120 jobs initially offered at SAH. The balance of the positions at the hotel (30 management and 40 shift leader positions) were predominantly filled by transfers from other AHI properties.

A series of tests and interviews were conducted with potential employees, which eventually left 280 applicants competing for the 120 advertised positions. After the final interview, potential recruits were divided into three categories. Category A was for applicants exhibiting strong leadership qualities, Category C was for applicants perceived to be followers, and Category B was for applicants with both leader and follower qualities. Department heads and shift leaders then composed prospective teams using a combination of people from all three categories. Once suitable teams were formed, offers of employment were made to team members.

Another major initiative by SAH was to adopt a totally multi-skilled workforce. Although there may be some limitations with highly technical jobs such as cooking or maintenance, wherever possible, employees at SAH are able to work in a wide variety of positions. A multi-skilled workforce provides far greater management flexibility during peak and quiet times to transfer employees to needed positions. For example, when office staff are away on holidays during quiet periods of the year, employees from the food and beverage or housekeeping departments can temporarily fill those positions.

The most crucial way, however, of improving the labour cost structure at SAH was to find better, more productive ways of providing customer service. SAH management concluded this would first require a process of 'benchmarking'. The prime objective of the benchmarking process was to compare a range of service delivery processes across a range of criteria using teams made up of employees from different departments within the hotel which interacted with each other. This process resulted in performance measures that greatly enhanced SAH's ability to improve productivity and quality.

The front office team discovered through this project that a high proportion of AHI Club member reservations were incomplete. As a result, the service provided to these guests was below the standard promised to them as part of their membership agreement. Reducing the number of incomplete reservations greatly improved guest perceptions of service.

In addition, a program modelled on an earlier project called 'Take Charge' was implemented. Essentially, Take Charge provides an effective feedback loop from both customers and employees. Customer comments, both positive and negative, are recorded by staff. These are collated regularly to identify opportunities for improvement. Just as importantly, employees are requested to note down their own suggestions for improvement. (AHI has set an expectation that employees will submit at least three suggestions for every one they receive from a customer.)

Employee feedback is reviewed daily and suggestions are implemented within 48 hours, if possible, or a valid reason is given for non-implementation. If suggestions require analysis or data collection, the Take Charge team has 30 days in which to address the issue and come up with recommendations.

Although quantitative evidence of AHI's initiatives at SAH is limited at present, anecdotal evidence clearly suggests that these practices are working. Indeed AHI is progressively rolling out these initiatives in other hotels in Australia, whilst numerous overseas visitors have come to see how the program works.


This article has been adapted and condensed from the article by R. Carter (1996), 'Implementing the cycle of success: A case study of the Sheraton Pacific Division', Asia Pacific Journal of Human Resources, 34(3): 111-23. Names and other details have been changed and report findings may have been given a different emphasis from the original. We are grateful to Asia Pacific Journal of Human Resources for allowing us to use the material in this way.

 

 

Motivating Employees under Adverse Conditions

THE CHALLENGE

It is a great deal easier to motivate employees in a growing organisation than a declining one. When organisations are expanding and adding personnel, promotional opportunities, pay rises, and the excitement of being associated with a dynamic organisation create feelings of optimism. Management is able to use the growth to entice and encourage employees. When an organisation is shrinking, the best and most mobile workers are prone to leave voluntarily. Unfortunately, they are the ones the organisation can least afford to lose - those with the highest skills and experience. The employees who remain are those whose job options are limited.

Morale also suffers during decline. People fear they may be the next to be made redundant. Productivity often suffers, as employees spend their time sharing rumours and providing one another with moral support rather than focusing on their jobs. For those whose jobs are secure, pay increases are rarely possible. Pay cuts, unheard of during times of growth, may even be imposed. The challenge to management is how to motivate employees under such retrenchment conditions. The ways of meeting this challenge can be broadly divided into six Key Points, which are outlined below.

KEY POINT ONE

There is an abundance of evidence to support the motivational benefits that result from carefully matching people to jobs. For example, if the job is running a small business or an autonomous unit within a larger business, high achievers should be sought. However, if the job to be filled is a managerial post in a large bureaucratic organisation, a candidate who has a high need for power and a low need for affiliation should be selected. Accordingly, high achievers should not be put into jobs that are inconsistent with their needs. High achievers will do best when the job provides moderately challenging goals and where there is independence and feedback. However, it should be remembered that not everybody is motivated by jobs that are high in independence, variety and responsibility.

KEY POINT TWO

The literature on goal-setting theory suggests that managers should ensure that all employees have specific goals and receive comments on how well they are doing in those goals. For those with high achievement needs, typically a minority in any organisation, the existence of external goals is less important because high achievers are already internally motivated. The next factor to be determined is whether the goals should be assigned by a manager or collectively set in conjunction with the employees. The answer to that depends on perceptions of goal acceptance and the organisation's culture. If resistance to goals is expected, the use of participation in goal-setting should increase acceptance. If participation is inconsistent with the culture, however, goals should be assigned. If participation and the culture are incongruous, employees are likely to perceive the participation process as manipulative and be negatively affected by it.

KEY POINT THREE

Regardless of whether goals are achievable or well within management's perceptions of the employee's ability, if employees see them as unachievable they will reduce their effort. Managers must be sure, therefore, that employees feel confident that their efforts can lead to performance goals. For managers, this means that employees must have the capability of doing the job and must regard the appraisal process as valid.

KEY POINT FOUR

Since employees have different needs, what acts as a reinforcement for one may not for another. Managers could use their knowledge of each employee to personalise the rewards over which they have control. Some of the more obvious rewards that managers allocate include pay, promotions, autonomy, job scope and depth, and the opportunity to participate in goal-setting and decision-making.

KEY POINT FIVE

Managers need to make rewards contingent on performance. To reward factors other than performance will only reinforce those other factors. Key rewards such as pay increases and promotions or advancements should be allocated for the attainment of the employee's specific goals. Consistent with maximising the impact of rewards, managers should look for ways to increase their visibility. Eliminating the secrecy surrounding pay by openly communicating everyone's remuneration, publicising performance bonuses and allocating annual salary increases in a lump sum rather than spreading them out over an entire year are examples of actions that will make rewards more visible and potentially more motivating.

KEY POINT SIX

The way rewards are distributed should be transparent so that employees perceive that rewards or outcomes are equitable and equal to the inputs given. On a simplistic level, experience, abilities, effort and other obvious inputs should explain differences in pay, responsibility and other obvious outcomes. The problem, however, is complicated by the existence of dozens of inputs and outcomes and by the fact that employee groups place different degrees of importance on them. For instance, a study comparing clerical and production workers identified nearly twenty inputs and outcomes. The clerical workers considered factors such as quality of work performed and job knowledge near the top of their list, but these were at the bottom of the production workers' list. Similarly, production workers thought that the most important inputs were intelligence and personal involvement with task accomplishment, two factors that were quite low in the importance ratings of the clerks. There were also important, though less dramatic, differences on the outcome side. For example, production workers rated advancement very highly, whereas clerical workers rated advancement in the lower third of their list. Such findings suggest that one person's equity is another's inequity, so an ideal reward system should probably weigh different inputs and outcomes according to employee group.