How do avatars and simulation work?

by Scott Zimmer, JD

Principal terms

– Animation variables (avars): defined variables used in computer animation to control the movement of an animated figure or object.

– Keyframing: a part of the computer animation process that shows, usually in the form of a drawing, the position and appearance of an object at the beginning of a sequence and at the end.

– Modeling: reproducing real-world objects, people, or other elements via computer simulation.

– Render farm: a cluster of powerful computers that combine their efforts to render graphics for animation applications.

– Virtual reality: the use of technology to create a simulated world into which a user may be immersed through visual and auditory input.



Avatars and simulation are elements of virtual reality (VR), which attempts to create immersive worlds for computer users to enter. Simulation is the method by which the real world is imitated or approximated by the images and sounds of a computer. An avatar is the personal manifestation of a particular person. Simulation and VR are used for many applications, from entertainment to business.

Virtual Worlds

Computer simulation and virtual reality (VR) have existed since the early 1960s. While simulation has been used in manufacturing since the 1980s, avatars and virtual worlds have yet to be widely embraced outside gaming and entertainment. VR uses computerized sounds, images, and even vibrations to model some or all of the sensory input that human beings constantly receive from their surroundings. Users can define the rules of how a VR world works in ways that are not possible in everyday life. In the real world, people cannot fly, drink fire, or punch through walls. In VR, however, all of these things are possible, because the rules are defined by human coders, and they can be changed or even deleted. This is why users’ avatars can appear in these virtual worlds as almost anything one can imagine: a loaf of bread, a sports car, or a penguin, for example. Many users of virtual worlds are drawn to them by this type of freedom.

Because a VR simulation does not occur in physical space, people can “meet” regardless of how far apart they are in the real world. Thus, in a company that uses a simulated world for conducting its meetings, staff from Hong Kong and New York can both occupy the same VR room via their avatars. Such virtual meeting spaces allow users to convey nonverbal cues as well as speech. This allows for a greater degree of authenticity than in telephone conferencing.

Mechanics of Animation

The animation of avatars in computer simulations often requires more computing power than a single workstation can provide. Studios that produce animated films use render farms to create the smooth and sophisticated effects audiences expect.

Before the rendering stage, a great deal of effort goes into designing how an animated character or avatar will look, how it will move, and how its textures will behave during that movement. For example, a fur-covered avatar that moves swiftly outdoors in the wind should have a furry or hairy texture, with fibers that appear to blow in the wind. All of this must be designed and coordinated by computer animators. Typically, one of the first steps is keyframing, in which animators decide what the starting and ending positions and appearance of the animated object will be. Then they design the movements between the beginning and end by assigning animation variables (avars) to different points on the object. This stage is called “in-betweening,” or “tweening.” Once avars are assigned, a computer algorithm can automatically change the avar values in coordination with one another. Alternatively, an animator can change “in-between” graphics by hand. When the program is run, the visual representation of the changing avars will appear as an animation.
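The tweening step can be sketched in a few lines of code. The sketch below assumes simple linear interpolation of one avar between two keyframes; real animation software uses spline curves and easing, and all names here are illustrative:

```python
def tween(start_avars, end_avars, frames):
    """Linearly interpolate avar values between two keyframes.

    start_avars and end_avars map avar names to their values at the
    first and last frame; returns one dict of avar values per frame.
    (Illustrative sketch only -- production systems interpolate along
    splines rather than straight lines.)
    """
    sequence = []
    for i in range(frames):
        t = i / (frames - 1)  # 0.0 at the first keyframe, 1.0 at the last
        frame = {name: (1 - t) * start_avars[name] + t * end_avars[name]
                 for name in start_avars}
        sequence.append(frame)
    return sequence

# A hypothetical avatar's arm rotating from 0 to 90 degrees over 5 frames:
frames = tween({"arm_angle": 0.0}, {"arm_angle": 90.0}, 5)
print([f["arm_angle"] for f in frames])  # [0.0, 22.5, 45.0, 67.5, 90.0]
```

Each in-between frame is generated automatically from the two keyframes, which is exactly the labor the tweening algorithm saves the animator.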

In general, the more avars specified, the more detailed and realistic the animation will be in its movements. In an animated film, the main characters often have hundreds of avars associated with them. For instance, the 1995 film Toy Story used 712 avars for the cowboy Woody. This ensures that the characters’ actions are lifelike, since the audience will focus its attention on them most of the time. Coding standards for common expressions and motions have been developed based on muscle movements. The MPEG-4 international standard includes 86 face parameters and 196 body parameters for animating human and humanoid movements. These parameters are encoded into an animation file and can affect the bit rate (data encoded per second) or size of the file.

Educational Applications

Simulation has long been a useful method of training in various occupations. Pilots are trained in flight simulators, and driving simulators are used to prepare for licensing exams. Newer applications have included training teachers for the classroom and improving counseling in the military. VR holds the promise of making such vocational simulations much more realistic. As more computing power is added, simulated environments can include stimuli that better approximate the many distractions and detailed surroundings of a typical driving or flying situation, for instance.

VR in 3-D

Most instances of VR that people have experienced so far have been two-dimensional (2-D), occurring on a computer or movie screen. While entertaining, such experiences do not really capture the concept of VR. Three-dimensional (3-D) VR headsets such as the Oculus Rift may one day facilitate more lifelike business meetings and product planning. They may also offer richer vocational simulations for military and emergency personnel, among others.




What is the science of ice cream? How do you make perfect ice cream?

by Clarissien Ramongolalaina

While the true origins of ice cream are somewhat unclear, there are a number of stories describing its invention: some say it was first created in ancient China and brought west by Marco Polo, while other tales have the Roman emperor Nero sending slaves to the mountains to collect snow for an ice cream-like treat. The rise in popularity and availability of ice cream certainly correlates with key scientific advances, particularly the concept of freezing point depression. The principle of freezing point depression states that adding a salt (solute) to an ice water mixture (solvent) lowers the temperature at which the mixture freezes.
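Freezing point depression can be put in numbers. The sketch below uses the standard textbook relation ΔT = i · Kf · m; the van’t Hoff factor (about 2 for table salt) and water’s cryoscopic constant (1.86 °C·kg/mol) are common reference values, not figures from this passage:

```python
def freezing_point_depression(molality, i=2, kf=1.86):
    """Degrees C by which a solute lowers water's freezing point.

    molality: moles of solute per kg of water
    i: van't Hoff factor (about 2 for NaCl, which dissociates
       into Na+ and Cl- ions)
    kf: cryoscopic constant of water, 1.86 degC*kg/mol
    """
    return i * kf * molality

# Dissolving 100 g of table salt (NaCl, 58.44 g/mol) in 1 kg of ice water:
moles = 100 / 58.44
delta_t = freezing_point_depression(moles)
print(round(delta_t, 1))  # about 6.4 -- the brine sits well below 0 degC
```

That sub-zero brine is what lets the cream mixture, which itself freezes below 0 °C because of its dissolved sugar, actually solidify.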

Ice cream is far from being a solid frozen bowl of cream. In fact, ice cream is a mixture of solids (ice and partially frozen milk fat), liquid (unfrozen cream and sugar water), and pockets of air trapped in the freezing mixture by mixing. These three phases are scattered among each other, forming a colloid. A colloid is a mixture with properties of both homogeneous and heterogeneous mixtures and is formally defined as a microscopically dispersed mixture in which the dispersed particles do not settle out. Ice cream can be described as both an emulsion and a foam, each of which is an example of a colloid. The emulsion (the solid phase of frozen fat globules and ice water distributed through the liquid phase of sugar water and cream) is typically unstable, but proteins and lipids coat the fats and stabilize the mixture, keeping it from collapsing into separate fat and water phases. These mediators between the fat and water phases are called emulsifiers. The foam nature of ice cream is due to the trapped pockets of air created as the freezing cream is mixed. Overrun is the increase in the volume of the ice cream from before to after mixing due to this trapped air. Some ice cream can have an overrun of nearly twice the volume of the ingredients before freezing and mixing.
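Overrun is simply the percent gain in volume over the unfrozen mix; a minimal sketch of that calculation (the function and variable names are illustrative):

```python
def overrun(mix_volume, ice_cream_volume):
    """Percent volume increase from air whipped in during freezing.

    mix_volume: volume of the liquid base before freezing
    ice_cream_volume: volume of the finished, frozen ice cream
    """
    return (ice_cream_volume - mix_volume) / mix_volume * 100

# A mix that doubles in volume has 100% overrun -- half the scoop is air:
print(overrun(1.0, 2.0))  # 100.0
# A more modest, premium-style churn:
print(overrun(2.0, 2.5))  # 25.0
```

By this measure, the "nearly twice the volume" ice cream mentioned above is approaching 100% overrun.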

The ratio of each phase of the colloid is critical for the mouthfeel and creaminess of the ice cream. Too much fat, and the ice cream will have the consistency of butter; too much sugar or milk solids creates a weak ice cream. Emulsifiers, meanwhile, limit crystal formation, keeping the ice cream from becoming crunchy.

Federal standards (21 CFR § 135.110) require that ice cream contain a minimum of 10% milk fat and 20% milk solids; the solids refer to proteins and sugars like lactose or sucrose. Most ice creams include stabilizing emulsifiers to minimize the formation of ice and fat crystals that detract from the taste of ice cream. Fat is important for taste, providing both a creamy feel on the tongue and sweetness. The proteins and sugar add body, or chewiness, to the ice cream. A number of different stabilizers and emulsifiers can be found in ice cream. Added whey protein or gelatin (a protein derived from collagen) is used to coat the fat and provide body. Custards use egg yolks, which contain the phospholipid lecithin as an emulsifier. Another commonly used emulsifier is Polysorbate 80, a complex carbohydrate with a long unsaturated fatty acid bonded to it. As an emulsifier in ice cream, Polysorbate 80 is used at fairly high concentration, where it keeps the ice cream scoopable. The carbohydrate portion of the molecule interacts with water and protein, while the fatty acid tail hydrophobically interacts with the fat globules. This coating keeps the fat and water phases together.

Stabilizers include complex carbohydrates (starches and gums) and are commonly found in the ingredient list of commercial ice cream. A common additive used to reduce the formation of ice crystals is alginate. Also a complex carbohydrate, alginate is isolated from the cell walls of algae. Alginate contains many OH functional groups and readily binds water through hydrogen bonding. The extensive hydrogen bonding of alginate limits the flow of water and forms a gel that acts as a thickener. The organization of the water–carbohydrate complex also hinders the formation of ice crystals. Carrageenan, the cell wall carbohydrate from red algae (seaweed), is used in place of alginate in many foods.

Ice cream can come in many confusing grades and styles. Superpremium and premium ice creams have low overrun and high fat content, with the best-quality ingredients. Standard ice cream has more overrun (air) than superpremium or premium ice cream and meets the minimum requirements of 21 CFR § 135.110. Fat-free ice cream has less fat than the CFR standard and must have less than 0.5% fat per serving. In contrast, light ice cream is a description of the amount of calories coming from fat: light ice creams must get less than half of their total calories per serving from fat. Low-fat and reduced-fat ice creams fall somewhere between light and fat-free in their fat composition. Standard vanilla ice creams, also called Philadelphia-style ice creams, differ from French vanilla in that French-style ice creams, like custards, use egg yolks as an emulsifier, while standard or Philadelphia-style ice creams (also called New York) use no egg or just the egg whites. Gelato is a frozen ice cream-like dessert that has higher fat and almost no overrun. Sherbet stretches the ice cream-like properties with fruit juice and some milk fat, whereas sorbet is not an ice cream at all! Sorbet contains no milk or cream and is instead a frozen puree of fruit with added alcohol or wine to lower the freezing temperature. Soft-serve ice cream is low fat (3–6%) with up to 60% air overrun.

Making ice cream is pretty straightforward, and while an ice cream maker helps, it can be done without a machine. A simple base recipe is a combination of milk, heavy cream, sugar, and salt. From this base, flavorings such as vanilla and chocolate can be added; the variations are as diverse as there are ice cream creations. Richer custard or French-style ice creams add egg yolks as emulsifiers, followed by heating and cooling the mixture: cream and milk are added to a mixture of egg yolk and sugar, which is then cooled before freezing. With your ice cream mixture complete, you are ready to freeze it, but now comes the work! Air must be introduced, crystallization must be limited, and the fat and liquid phases must be kept together while freezing. This is all accomplished by mixing. Mixing can be done by hand by placing the liquid ice cream into a larger container of ice, water, and salt. The salted ice bath has a lower temperature than ice water alone, allowing the sugary, fatty “ice cream” mixture to freeze. Ice cream makers maintain constant mixing as the liquid ice cream mixture begins to freeze. Once frozen, the ice cream can be eaten or left in the freezer to “harden.” At freezer temperature (−4°F/−20°C), only about 75% of the water is frozen; the rest is a liquid sugar–water mixture. Rapid, deep freezing causes most of the liquid water to freeze without forming unwanted crystals. Partial thaw-and-refreeze cycles increase the amount of the liquid phase, and larger crystals form, giving the ice cream an off-taste and a crunchy texture.


I love chocolate! I want to know a little more about it!

by Clarissien Ramongolalaina

Chocolate is arguably one of the world’s most loved foods. More than 600 different types of molecules contribute to its acidity, bitterness, astringency, sweetness, creaminess, and chocolate flavor and aroma, making chocolate one of the most complex, flavorful foods known. Interestingly, however, chocolate comes from a bean that is itself quite unpalatable: crunchy, astringent, bitter, and essentially aromaless. Thus, realizing its full potential as a delectable food took hundreds of years and numerous peoples. The details of the very early history of chocolate are not well established. As far as we know, the cocoa tree, Theobroma cacao, first appeared in South American tropical rain forests, and its potential as a food was realized by the Mayan, Inca, and Aztec civilizations.

Around 700 AD, the tree was carried northward (toward Mexico) by the Mayans and was cultivated and utilized as a food source throughout that region, as the beans contained fat, starch, and protein. The tree flourishes only within 20° north and south of the equator, so cultivation of the tree was (and still is) quite limited. However, during this time, the beans were exported northward into what is now the United States and were so highly valued that they were used as a form of currency.

The word chocolate derives from the Aztec word xocolatl, the name of a drink in which roasted beans were simmered in hot water, flavored with red pepper and vanilla, and thickened with ground corn. The beans were also used as a spice to flavor meat dishes in the Aztec civilization, similar to the moles still used in Mexican cuisine today.

The history of chocolate becomes better documented when the first cocoa beans were brought to Europe in 1502. The beans were seized during the fourth and last voyage of Christopher Columbus, near an island off the coast of what is now Honduras; unimpressed, the crew mistook them for almonds. However, the Europeans had little understanding of how to use the beans and primarily prepared a drink similar to that of the Aztecs, with less red pepper. Eventually, the chocolate drink was modified and sweetened in Spain by mixing the cocoa beans with milk, sugar, and eggs, and it gradually spread throughout Europe, arriving in England in 1650, where it became particularly popular and was the basis for the famous chocolate houses of London.

Chocolate drinks continued to be developed and adored throughout the seventeenth and eighteenth centuries in Europe; however, there was an undesirable layer of fat within the drink due to the high fat content of the beans.

The first report of the use of a press to remove fat from the beans occurs in a French treatise published in 1678. However, credit for inventing the process of extracting cocoa butter from the cocoa bean to make cocoa powder goes to the Dutchman Coenraad van Houten, in 1828. The ability to separate the cocoa butter from the cocoa solids was a breakthrough in the culinary world; it allowed the production of solid chocolate by an English company in 1847, and two years later, milk chocolate was developed by a Swiss firm. Interestingly, the love of chocolate lives on in both Switzerland and England. The Swiss ranked first in annual chocolate consumption, at 11.9 kg per person in 2012, while the Irish and English ranked second and third, respectively, consuming 9.9 and 9.5 kg per person. The average American eats 5.5 kg of chocolate annually, which corresponds to about 128 Hershey’s milk chocolate bars. We love our chocolate!
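The bars-per-year figure is easy to sanity-check, assuming a standard Hershey’s milk chocolate bar weighs about 43 g (an assumed weight, not a figure from the text):

```python
# Back-of-envelope check of the consumption figures above.
bar_grams = 43          # assumed weight of one standard milk chocolate bar
us_kg_per_year = 5.5    # average annual US consumption from the text

bars = us_kg_per_year * 1000 / bar_grams
print(round(bars))  # 128 -- about 128 bars per American per year
```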

Fortunately, chocolate is not only delicious to eat; there is also a myriad of cool science fundamental to its flavor, aroma, and behavior. Below, you will learn about the steps and science involved in the production and manufacture of chocolate from cacao beans, the favorable or unfavorable transformations that occur when you work with chocolate in the kitchen, and the different types of chocolate. Let’s take a walk into Willy Wonka’s factory to learn more about the science of chocolate.
