Universidad Tecnológica Nacional
   Facultad Regional Buenos Aires

FINAL EXAM MODELS

 

Inglés Técnico - Nivel II

 

Inglés Técnico - Nivel II (Sistemas)

Author: David A. Patterson.

Smaller, Faster, Cheaper

Two inventions sparked the computer revolution.

The first was the so-called stored-program concept. Every computer system since the late 1940s has adhered to this model, which prescribes a processor for crunching numbers and a memory for storing both data and programs. The advantage of such a system is that, because stored programs can be easily interchanged, the same hardware can perform a variety of tasks. Had computers not been given this flexibility, it is probable that they would not have met with such widespread use. Also during the late 1940s, researchers invented the transistor. These silicon switches were much smaller than the vacuum tubes used in early circuitry. As such, they enabled workers to create smaller, and faster, electronics.

More than a decade passed before the stored-program design and transistors were brought together in the same machine, and it was not until 1971 that the most significant pairing, the Intel 4004, came about. This processor was the first to be built on a single silicon chip, which was no larger than a child's fingernail. Because of its tiny size, it was dubbed a microprocessor. And because it was a single chip, the Intel 4004 was the first processor that could be made inexpensively in bulk.

The method manufacturers have used to mass-produce microprocessors since then is much like baking a pizza: the dough, in this case silicon, starts thin and round. Chemical toppings are added, and the assembly goes into an oven. Heat transforms the toppings into transistors, conductors and insulators. Not surprisingly, the process, which is repeated perhaps 20 times, is considerably more demanding than baking a pizza. One dust particle can damage the tiny transistors. So, too, vibrations from a passing truck can throw the ingredients out of alignment, ruining the end product. But provided that does not happen, the resulting wafer is divided into individual pieces, called chips, and served to customers.

Although this basic recipe is still followed, the production line has made ever cheaper, faster chips over time by churning out larger wafers and smaller transistors. This trend reveals an important principle of microprocessor economics: the more chips are made per wafer, the less expensive they are.
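
A minimal sketch (not from the article) of the economics described above: if the cost of processing a wafer is roughly fixed, the cost per chip falls as more chips fit on each wafer. The figures used are illustrative, not real fabrication data.

    def cost_per_chip(wafer_cost, chips_per_wafer, yield_fraction=1.0):
        """Approximate cost of one good chip from a processed wafer."""
        good_chips = chips_per_wafer * yield_fraction
        return wafer_cost / good_chips

    # Larger wafers and smaller transistors both raise chips_per_wafer.
    print(cost_per_chip(wafer_cost=5000, chips_per_wafer=100))   # 50.0 per chip
    print(cost_per_chip(wafer_cost=5000, chips_per_wafer=400))   # 12.5 per chip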

 

A lemon law for software (Sistemas)

If Microsoft made cars instead of computer programs, product liability suits might by now have driven it out of business. Should software makers be made more accountable for damage caused by faulty programs?

Events of the past six months have shown just how fragile the industrial world’s technological infrastructure can be. No question that terrorism can bring business districts, power grids, computer networks or air-traffic-control systems to their knees. But so, too, can stupidity, carelessness and haste. Indeed, from the Titanic to Chernobyl, and in nine out of ten accidents in the air and on the road, human error has accounted for vastly more fatalities than malfunctioning parts or sabotage. Unfortunately, that is about to get even worse.

There is no escaping the trend towards replacing slow, cumbersome yet ultimately reliable bits of machinery with cheap, quick and compact “fly-by-wire” controls that are managed entirely by software. All well and good - except that there is no such thing as a bug-free piece of software. Even experienced programmers make on average one error for every ten lines of code. And all it takes is three or four defects per 1,000 lines of code for a program to start doing unpredictable things. With commercial software containing not thousands but increasingly millions of lines of code, the potential for disaster is all too clear.
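
A back-of-the-envelope calculation, using the defect rate quoted above and a hypothetical program size, shows how quickly the numbers add up; both figures below are illustrative.

    defects_per_kloc = 3          # "three or four defects per 1,000 lines of code"
    lines_of_code = 5_000_000     # hypothetical size of a large commercial program

    expected_defects = defects_per_kloc * lines_of_code / 1000
    print(f"Latent defects to expect: {expected_defects:,.0f}")   # 15,000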

Software defects would be bad enough if all they did was require the hardware to be reset. But defects invariably provide security holes for malicious hackers to exploit. Making matters worse, instead of working to close security holes in their existing products, software firms tend to cram more and more features into their programs to entice customers to buy the latest upgrades.



 

Inglés Técnico - Nivel II (Electrónica)

Author: Vincent W. S. Chan.

All-Optical Networks

Fiber optics will become more efficient as light waves replace electrons for processing signals in communications networks.

Contemporary fiber-optic networks transmit voice, video and data at speeds 10 to 100 times faster than the standard copper wiring that has been used in telecommunications  for over a century. They have, nonetheless, realized only a small fraction of the promise of the technology.

To fulfill its potential, fiber optics must do more than simply replace copper telephone wiring with thin, cylindrical conduits of glass that guide light. Optical transmission must in fact go beyond the limitations imposed by the electronics technology that preceded it.

In contemporary fiber-optic networks, each time a light pulse is amplified, switched, inserted into or removed from the network, it must be changed into a stream of electrons for processing. This optoelectronic conversion can become an impediment in very high speed communications. A network must be saddled with more expensive and complex electronics, and it becomes more difficult to process the small pulses of light needed to transmit tens of gigabits (a gigabit is a billion bits) of digital information in a second's time. Above a certain transmission speed, about 50 gigabits per second, electronic equipment will find it hard to handle this constant back-and-forth transformation between electrons and light waves.

It would be simpler, faster and more economical to transfer optical signals from one end of a network to the other by using the properties of the light wave itself to route the transmission along different pathways through the network. The signal would become electronic only when it moved into the circuits of the computer for which it is intended, or else into a lower-speed network that still employs electronic processing of signals.

This all-optical network would build on the successes of the fiber-optic networks currently deployed commercially, which rely on optoelectronic components for signal processing. Commercial fiber-optic cables owned by long-distance telecommunications companies, for example, transfer telephone calls and video images as digital bits, as many as 2.5 gigabits each second per fiber.


STIMULATED EMISSION (Electrónica)

Although nanolasers push the boundaries of modern physics, the devices work much like their early ancestor, a contraption fashioned from a rod of dark ruby more than 35 years ago. Essentially, a lasing material (for example, a gas such as helium or neon, or a crystalline semiconductor) is sandwiched between two mirrors. The substance is “pumped” with light or electricity. The process excites the electrons in the material to hop from lower to higher energy levels. When the electrons return to the lower stations, they produce light, which is reflected between the mirrors.

The bouncing photons trigger other “excited” electrons - those in higher energy states - to emit identical photons, much like firecrackers that pop and set off other firecrackers. This chain reaction is called stimulated emission. (Hence the name “laser,” which is an acronym for “light amplification by stimulated emission of radiation.”) As the number of photons grows, they become part of a communal wave that intensifies, finally bursting through one of the mirrors in a concentrated, focused beam.

But not all the photons take part in this wave. In fact, many are emitted spontaneously, apart from the chain reaction. In a large space - to a subatomic particle, the size of a typical laser cavity is immense - photons are relatively free to do what they want. Thus, many of the free-spirited photons are literally on a different wavelength, and they can scatter in all directions, often hitting the sides of the laser and generating unwanted heat instead of bouncing between the mirrors. For some types of lasers, only one photon in 10,000 is useful.

Because of this enormous waste, a certain threshold of energy is necessary to ensure that the number of excited electrons is large enough to induce and maintain stimulated emission. The requirement is analogous to the minimum amount of heat needed to bring a pot of water to boil. If the hurdle is not cleared, the laser will fail to attain the self-sustaining chain reaction crucial to its operation. This obstacle is why semiconductor lasers have required relatively high currents to work, in contrast to silicon transistors, which are much more frugal. But if semiconductor lasers could stop squandering energy, they could become competitive with their electronic counterparts for a host of applications, including their use in computers.

 


Inglés Técnico - Nivel II (Metalúrgica)

HARDNESS

John Symonds, expanded by staff, in "Marks' Standard Handbook for Mechanical Engineers", Tenth Edition, Avallone/Baumeister, New York, p. 5-12.

John Symonds, Fellow Engineer (Retired), Oceanic Division, Westinghouse Electric Corporation

Hardness has been variously defined as resistance to local penetration, to scratching, to machining, to wear or abrasion, and to yielding. The multiplicity of definitions, and corresponding multiplicity of hardness-measuring instruments, together with the lack of a fundamental definition, indicates that hardness may not be a fundamental property of a material but rather a composite one including strength, work hardening, true tensile strength, modulus of elasticity, and others.

Scratch hardness is measured by the Mohs scale, which is so arranged that each mineral will scratch the mineral of the next lower number. In recent mineralogical work, and in certain microscopic metallurgical work, jeweled scratching points either with a set load or else loaded to give a set width of scratch have been used. Hardness in its relation to machinability and to wear and abrasion is generally dealt with in direct machining or wear tests, and little attempt is made to separate hardness itself, as a numerically expressed quantity, from the results of such tests.

The resistance to localized penetration, or indentation hardness, is widely used industrially as a measure of hardness, and indirectly as an indicator of other desired properties in a manufactured product. The indentation tests described below are essentially nondestructive, and in most applications may be considered nonmarring, so that they may be applied to each piece produced; and through the empirical relationship of hardness to such properties as tensile strength, fatigue strength, and impact strength, pieces likely to be deficient in the latter properties may be detected and rejected.
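
One widely quoted example of such an empirical relationship, not taken from the handbook excerpt itself, is the rule of thumb that for many steels the ultimate tensile strength in MPa is roughly 3.45 times the Brinell hardness number. A minimal sketch:

    def estimated_tensile_strength_mpa(brinell_hardness):
        """Rough UTS estimate for a steel from its Brinell hardness (HB);
        an approximation valid only for many steels, not a general law."""
        return 3.45 * brinell_hardness

    print(estimated_tensile_strength_mpa(200))   # about 690 MPa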

 


 

Inglés Técnico - Nivel II (Eléctrica)

Electricity and Magnetism

All atoms contain negatively charged electrons and positively charged protons along with (with the exception of hydrogen) neutral neutrons. If there were no way to separate these charges, there would be no such phenomena as electricity or magnetism. Static electricity consists of an unbalance of positive and negative charges. An abundance of electrons produces a net negative charge, and an abundance of protons (or a deficiency of electrons) produces a net positive charge. If charges flow through space or along a conductor, an electric current results. Electric currents produce magnetism.

The unit of charge would logically be the charge on one electron, first measured by Nobel laureate Robert Millikan in his famous oil drop experiment. This is such a tiny amount of charge, however, that it would not make a good standard unit.

Luckily for students and scientists alike, electrical units were not defined until after the metric system began. There never have been any archaic units in this field. All of the electrical units are metric and they fit together in a single, logical system. The beginning of the system is in the definition of the unit of electric current, the ampere.

The ampere is the amount of electric current flowing in two electrical conductors (wires) that will exert a magnetic force of 200 nanonewtons per meter of length on each other when the wires are spaced one meter apart. This is the fundamental beginning of all of the electrical units.

One ampere flowing for one second of time passes a coulomb of charge along the wire. A coulomb of negative charge is that of 6 280 000 000 000 000 000 electrons. (R. A. Millikan) If 96 500 coulombs (one faraday) of charge are used to electroplate a metal such as silver, a mole of the metal will be deposited. This is as many grams of the metal as its atomic weight (for silver this is 108 grams, or about four ounces). Metals with a valence of two (two loose electrons per atom), such as copper, require two faradays to plate a mole, etc. This kind of relationship (electrochemistry) leads to the easy determination of atomic weights of all of the elements. (H G J Moseley)
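
The electroplating arithmetic above can be worked through directly with the figures given in the passage (one faraday = 96,500 coulombs; silver 108 g/mol, valence 1; copper valence 2). The current and time in the example are illustrative values.

    FARADAY = 96_500  # coulombs per mole of electrons, as given in the passage

    def mass_deposited(current_amperes, time_seconds, molar_mass, valence):
        """Grams of metal plated out by a given current flowing for a given time."""
        charge = current_amperes * time_seconds        # coulombs of charge passed
        moles = charge / (valence * FARADAY)           # moles of metal deposited
        return moles * molar_mass

    # Two amperes flowing for one hour deposit roughly 8 grams of silver:
    print(mass_deposited(current_amperes=2, time_seconds=3600, molar_mass=108, valence=1))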

 

Types of Fuel Cells (Eléctrica)

Different types of fuel cells have different operational temperatures and are made of different materials. They are characterized according to electrolytes or ionic conductor types. Pairs of cells are electrically connected in series by bipolar plates. This makes it possible to construct cell stacks with voltages that are higher than those of individual cells (approximately 1 V). The bipolar plates usually have gas channels that supply the electrodes and drain off surplus gases and reaction products. The major fuel cell types are:

  1. Alkaline Fuel Cells (AFCs); these operate on pure hydrogen and oxygen because they are sensitive to CO2. Therefore, they are limited to marginal markets such as space exploration.
  2. Polymer Electrolyte Membrane Cells (PEMs) operate on air and are thus applicable to many fields, especially mobile uses.
  3. Phosphoric acid Fuel Cells (PAFCs) are the most developed fuel cell types for mobile applications.

In Japan, Europe and the United States several experimental plants based on these three fuel cell types are already in operation, some of them providing power for gas companies. These plants typically produce 200 kW. A plant producing 11 MW is currently being tested in Japan. The energy costs associated with these plants are still twice those for conventional decentralized energy production systems, with PAFC cells being the most economical.

There are also two high-temperature fuel cell types; these are:

  1. The Molten Carbonate Fuel Cell (MCFC), which is operated at 650 °C, and
  2. The Solid Oxide Fuel Cell (SOFC), which has an operational temperature between 800 and 1000 °C.

These cell types hold the most promise in the market for stationary energy supplies. Both types were studied at Siemens during a two-year trial period, although MCFC-based plants already deliver several hundred kilowatts worldwide. In 1990, Siemens decided to focus on development of the SOFC, in addition to its already advanced work in PEM fuel cells, because of its system technology and usability advantages.




 

Inglés Técnico - Nivel II (Civil)

Construction picks up, but incentives needed

Construction activity in Argentina fell a painful 6.5 percent in 1999, the Ministry of Economy reported on Tuesday. The decline in activity is the result of a bruising recession which saw industrial activity shrink 6.8 percent, and the GDP fall almost three percent.

Fortunately, there are signs that a recovery is on the way. Construction company executives certainly seem to think so. According to a survey of construction firms conducted by the Economic Institute of the Universidad Argentina de la Empresa (UADE) at the end of last year, 63 percent of those surveyed said they believed the sector will pick up this year. Respondents attributed their optimism to cheaper supplies, easier access to credit and, perhaps most importantly, renewed expectations of economic growth. Those expectations are already beginning to be borne out. For example, in December industrial activity posted 9.0 percent year-on-year growth compared with the same month in 1998.

In spite of signs of improvement, however, some remain skeptical. Hugo Lehmann, Commercial Manager of Sika Argentina, a company specializing in the production of construction chemicals and materials, believes the optimism may be a little premature. A construction firm's prospects depend heavily on its area of specialization, he explained. "Companies dealing with public work projects, for example, would have a bleak view of the recent past," he said.

In Lehmann's view, Argentina will need more than an economic recovery to bring about strong growth in the construction industry. To begin with, he stated, something must be done about expanding the availability of affordable credit. The construction industry, like the rest of national industry, and especially small and medium enterprises, is suffering from exorbitant interest rates, he said. He also suggested making $500 to $3,000 credits available to property owners so they could maintain and repair their properties.

The advantage of this proposal, said Lehmann, is that it would both improve the nation's infrastructure and potentially create jobs. "Argentina," he said, "is behind in cement per capita usage levels compared to Venezuela, Colombia, Brazil and Chile."

That ratio is often used to rate and rank both a country's quantity and quality of infrastructure and its level of development.

The Ministry of Infrastructure, Housing and Public Works estimates that significant improvements in infrastructure require investment equal to approximately two percent of gross domestic product (GDP).



Testing fire resistance of timber frame buildings (Civil)

A major project investigating and testing the fire resistance of medium-rise timber frame buildings is now under way at BRE (the Building Research Establishment). The main aims are to show that the performance of a complete timber frame building subject to a real fire is at least equivalent to that obtained from standard fire tests on individual elements, and to demonstrate that this form of construction can meet the relevant functional fire performance requirements of the Building Regulations.

The project will evaluate levels of safety for residents of a medium-rise timber frame building given a particular fire scenario based on statistical data for fire loads in residential accommodation. Other objectives include:

  1. Validating the parametric approach adopted in Eurocode 1 with regard to fire severity and duration;
  2. Assessing the means of determining external flame spread;
  3. Allowing an assessment to be made of the likely spread of fire both across compartment boundaries and by upward flame spread through ventilation openings and cavities;
  4. Providing information to be used to verify analytical models used to predict thermal and structural behaviour at elevated temperatures;
  5. Measuring the extent of charring to elements of the structure;
  6. Providing information on the performance of connectors and connections subject to fire.

The intention is to carry out a full scale fire test in one of the four flats on a storey of the timber-frame building. The compartment will be fully fitted out, and furnished with a pre-determined value of fire load towards the upper end of the spectrum for residential accommodation in flats. Ignition will be from a single source, most probably in the living room/kitchen area. Timber cribs will be used for ignition in this location and flame spread will be completely uninhibited. The ventilation to the compartment will be arranged to represent a worst case scenario consistent with the form of construction and the regulatory requirements.




 

Inglés Técnico - Nivel II (Industrial)

Shift scheduling tips from manufacturing sector

Downsized utility plants and already-lean independent power producers’ (IPP) facilities must find more efficient ways to allocate personnel and resources. Often, that starts with a re-examination of shift schedules.

But a shift schedule is more than just a series of work hours or a pattern of days on/days off, warns Richardson Coleman, Coleman Consulting Group, Ross, Calif. A shift schedule, he says, should be thought of as a complete system for optimally deploying both capital and personnel. It must blend employee quality-of-life requests with health and safety requirements, but it must begin with business needs. Unfortunately, utilities often start with the traditional schedule and try to adapt their business around it, Coleman asserts.

Coleman is currently helping a major US utility revise its shift schedules. The utility owns more than 10 central fossil-fuel plants and numerous smaller hydro and peaking units, which are operated and maintained by approximately 1,000 employees. Most of the sites used shift schedules that were based on tradition or resulted from a long-standing contract negotiation. One plant had recently switched to the 12-hr shift schedule commonly applied by IPPs. None of these schedules, including the IPP type, optimized capital and personnel, Coleman reports. He, like other specialists, contends that power producers can learn a great deal about shift schedules from the manufacturing industry.

Overtime vs. idle time. Like many companies, Coleman’s utility client spent many hours tracking and managing overtime. But he was able to change its focus to idle time. Overtime, which results from undermatching the workload, has a relatively low ratio of pay to work output. In contrast, since the work output from one hour of idle time is zero, its pay-to-work-output ratio is infinity!
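
A small sketch of the pay-to-output comparison Coleman draws; the wage figures are hypothetical, and the point is simply that an overtime hour still buys some output while an idle hour buys none, so its ratio diverges.

    def pay_to_output_ratio(hourly_pay, output_per_hour):
        """Pay spent per unit of work produced; infinite when nothing is produced."""
        if output_per_hour == 0:
            return float("inf")
        return hourly_pay / output_per_hour

    print(pay_to_output_ratio(hourly_pay=45, output_per_hour=1))   # overtime hour
    print(pay_to_output_ratio(hourly_pay=30, output_per_hour=0))   # idle hour -> inf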

 

Applicability of Simulation (Industrial)

Developing dynamic models that individually represent the concerns and important aspects of each participant group’s responsibilities, and, subsequently, linking those models in an integrated fashion, is an effective means to coordinate, monitor, document, and explore the consequences of the inter-relationships among the many functional aspects of the product realization process. The functional groups would be able to answer many of the questions that human coordinators (i.e. managers) would try to satisfy throughout the collaborative task.

Each group or individual creates activity and process simulation models to reflect the concerns, known interdependencies and critical elements in accomplishing their tasks. The models:

  1. Provide a means of formally representing the problems, tasks, and constraints that define the individual functional contributions to the collaborative effort of the team and to capture the decisions and the stream of reasoning from which subsequent decisions were derived.
  2. Support abstraction and explanatory mechanisms for the vast amounts of specialized data that each group is intimately concerned with but which the associated groups are only interested in from the standpoint of possible dependencies, interactions, and pathologies.

The purpose of each group representing their concerns as one or more models, instead of just making their data available to the others, is a reflection of the difficulty of making sense of the vast amounts of data involved and the lack of control over what each group is allowed, or should be allowed, to have access to.

This attitude towards exchange of information rather than data certainly does not mean to imply that the direct, face-to-face communication and association of the various groups can be completely replaced by technology, but rather illustrates how technology can be employed to capture, formalize and communicate the concerns and considerations of the participants.




 

Inglés Técnico - Nivel II (Textil)

WEAVING SPEED

Increased weaving speeds have placed a substantially greater demand on the quality of the sized warp yarn. Not only must the quality level itself (such as reduced yarn hairiness in the case of spun yarns) be improved: it must be accomplished on a more consistent basis to allow maximum performance by the weaving machine. This has led to greater demands to control all the parameters which affect the warp quality in a more precise and consistent manner. The application of improved control systems, often using computer technology, has been one significant technological change in sizing machine system design.

As one or two sizing machines may often be required to support a large number of weaving machines, the requirements for high performance and low maintenance have been increased.

This has led to the development of mechanical and electrical components with greater reliability and lower maintenance requirements. And there is no doubt about the main priority here. While speeds (production rates) are always important, consistent quality of the warp yarn is the paramount challenge faced by the sizing machine manufacturer.

Further advances in warp preparation and presentation are seen as a prerequisite to capitalizing on dramatic gains in the weaving process itself.

Just as most changes in warp preparation have been “of an evolutionary rather than revolutionary nature”, so this will continue, with subtle but ongoing improvements in warping and sizing machine technology.

 

Dyeing and Finishing (Textil)

Scoured wool may be dyed in the form of “tops” or dyed and finished after spinning and weaving.

The wastes produced from the dyeing and finishing processes are contributed by the spent liquors and subsequent washings after singeing, bleaching, dyeing, and finishing. It is usually impractical to separate the rinse or wash waters from the stronger wastes within the plant, and these are collected in a common drain for treatment. There are isolated cases where cleaner waters, such as cooling and condenser waters, can be re-used in other processes or discharged directly to waste. The quantity of wastes varies greatly, with a mean volume of about 40 gals per yard of piece goods, or about 10 gals per pound of goods, or about 10 gals per pound of “tops.”

These wastes also vary greatly in strength, not only on account of the quantity of process water used by the different plants, but also because of the various types of dyes and other chemicals used in the finishing processes. Typical analyses of weighted composite samples of waste from top dyeing and piece goods dyeing and finishing are given in Table 13.

Wastes from wool dyeing and finishing have been treated by chemical precipitation with alum or iron salts, followed by sedimentation. The effluent produced is often satisfactory for direct discharge to water courses. Filtration of the effluent on trickling filters or sand filters can be carried out if greater purification is required.

The efficiency of treatment of wool blanket mill wastes by coagulation and filtration is shown in Table 14. In this case the combined wastes are treated with ferric sulfate, settled for about 8 hrs, and the chemical effluent is applied to sand beds at a rate of 110,000 gad (gallons per acre per day). The sludge is drawn by gravity to sludge beds and, when dry, removed to waste land. The final effluent is consistently of high quality and is discharged without objection into a small brook.

 


 

Inglés Técnico - Nivel II (Mecánica)

THE PRIVATE CAR

When the automobile was first conceived in the late nineteenth century it was imagined as a leisure vehicle.

Unthinkable to the early inventors would have been the mass proliferation of the automobile and its immense influence on modern society and its economy. The role of the private car began to change in the 1920s, and in recent years the use of the automobile has grown exponentially in developing nations, where the private car is quickly becoming a primary form of transportation and an integral aspect of their economies.

What should be done to ensure that we still have enough petroleum deposits, enough clean air, and enough room to maneuver our cars and prevent global warming in the coming century? One extreme and unfeasible suggestion is that we abolish the private car altogether and find other forms of transportation. However, the most immediate and realistic solution appears to start with a rethinking and redesign of the car itself. Make the private car better; make it a more intelligent and a more efficient machine.

The key to making automobiles more gas efficient is to make them lighter and provide them with a less wasteful engine. A large car is not always best suited for its purported function. Somewhere between only two and five percent of the energy used to power a large car is actually employed to move the passengers; the other ninety-five to ninety-eight percent is used simply to move the car itself. This can be remedied without much suffering on the part of the consumer.

Consequently, much of the current development focuses on new materials that can replace the steel parts, including strong composite plastics, aluminum, magnesium, ceramics, and even carbon composites. Ironically, much of the initial research into the use of these substances was pioneered in automotive racing, where weight reduction is in direct proportion to higher speeds. These investigations into new materials and structures will, of course, eventually lead to new aesthetic solutions.

 

Hybrid Electric Vehicles (Mecánica)

Hybrid cars, which combine a fossil fuel combustion engine with an electric motor/battery, come in three types. The series type uses the combustion engine to drive a generator only. In the parallel type, both the combustion engine and the electric motor power the car. Finally, there is the dual type, which is a combination of the series and parallel types. In series type hybrid cars, driving performance is limited by the output of the electric motor. On the other hand, since the gas engine is only used under certain conditions, emission levels are very low. Parallel type hybrids have a large output, but they tend to use their engines more. Meanwhile, the dual types exhibit the characteristics of both types, but are mechanically complex.

These hybrid vehicles use their combustion engines under high load conditions, when engines operate at high efficiencies, and rely on their electric motors under low load conditions, when motors are highly efficient. This set-up brings huge improvements in gas mileage. The battery in hybrid vehicles does not need to be charged from an external electricity source, so these cars can travel much longer distances than electric cars. Finally, the conditions under which the gas-powered engine is used are limited, so the cars can easily meet emission regulations. Furthermore, technology to scrub exhaust can be utilized when the engine is in use. On top of all this, regenerative braking increases efficiency even further. Because of these features, the predominant view is that hybrid vehicles have a better chance of catching on in Japan than electric vehicles do.
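
A toy model, not from the article, of the load-dependent split described above: the electric motor covers low-load driving and the combustion engine takes over at high load. The threshold value is an arbitrary illustrative number.

    HIGH_LOAD_THRESHOLD_KW = 20  # hypothetical crossover point between the two sources

    def power_source(demanded_power_kw):
        """Pick the source the passage says is more efficient at the current load."""
        if demanded_power_kw <= HIGH_LOAD_THRESHOLD_KW:
            return "electric motor"      # efficient at low load
        return "combustion engine"       # efficient at high load

    print(power_source(8))    # light city driving -> electric motor
    print(power_source(55))   # hard acceleration  -> combustion engine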

In 1997, the Toyota Prius became the world’s first mass-produced hybrid vehicle. Under Japan’s 10-15 mode test, this dual type hybrid rated a gas mileage of 31 kilometers/liter, which is twice that of equivalent gas-powered vehicles.

Both Honda and Nissan followed Toyota by developing parallel-type hybrid passenger vehicles under the names Insight and Tino-hybrid, respectively. Since then, many hybrid vehicles have entered the car market. All the hybrids use a state-of-the-art battery that keeps energy density low while boosting power density in keeping with the demand for power.




 

Inglés Técnico - Nivel II (Química)

MANUFACTURE OF OLEORESINOUS VARNISHES

Considerable skill is required in the manufacture of an oleoresinous varnish. The resins and the drying oils, suitably pre-heated, are brought together under heat and allowed to interact until a compatible, clear and properly-bodied mixture is obtained. Time and temperature must be carefully watched. The various reactions that occur, such as the depolymerization of cross-polymers between the two, are exceedingly complex. After this high temperature reaction, the necessary metallic driers and the thinners are added. The cooled varnish is then clarified in centrifuges and stored in large tanks for the purpose of blending successive batches and further slow clarification.

In these oleoresinous varnishes, the ratio of drying oil to resin is of prime importance. In general, with the natural and many of the synthetic resins, the higher the oil content, the more flexible and durable the varnish. Conversely, air-drying speed, initial hardness and brittleness increase with resin content. The usual method of stating this oil-resin ratio is by oil length.

The synthetic resin industry has made available various stock solutions having the properties of different oleoresinous varnishes. By suitable selection and blending of these stock solutions, vehicles of the desired characteristics can be formulated without the necessity of breaking. Enamels are produced by the usual dispersion of the pigments in these vehicles, adding the necessary driers in solution form. By way of further assistance, pre-ground pigment pastes compatible with these vehicles are commercially available, thereby simplifying the incorporation of pigments to make the desired enamels.

 

Superglass (Química)

Investigators at the University of Rochester have found a way to make the glass employed in many commercial lasers stronger and more resistant to cracks than the glass now in use. Consequently, lasers can be fired at higher repetition rates and run at significantly (up to six times) greater power than is now possible. Because existing glass, which serves as a matrix for neodymium and other compounds that lase, can easily be replaced with the new glass, it should be possible to quickly improve the quality of existing lasers having applications in such diverse fields as industrial machining, ocular surgery and microchip fabrication.

The collaborators immersed samples of the glass in a bath of molten salt containing atoms of sodium and potassium. Even though the two elements are much larger in size than lithium, they have similar chemical properties. As a result, when a slab is left to soak in the bath for several days, atoms of lithium diffuse from the surface of the glass and are replaced by sodium and potassium. When they are squeezed into the small “holes” left by lithium, the larger atoms create a compressive layer of stress around the slab. The layer is typically only 60 micrometers (thousandths of a millimeter) thick, but the compression it generates is sufficient to hold the slab together and to prevent cracks from traveling through it. Initial tests of the strengthened glass have been successful, and the development and marketing of a commercial product are now being explored.




 

Inglés Técnico - Nivel II (Naval)

Supercavitation Fundamentals

PROPELLING A BODY through water takes considerable effort, as every swimmer knows. Speeding up the pace makes the task even harder because skin friction rises with increased velocity. Swimming laps entirely underwater is even more difficult, as water produces 1,000 times more drag resistance than air does.

Naval architects and marine engineers contend constantly with these age-old problems when they streamline the shapes of their hull designs to minimize the frictional drag of water and fit their ships with powerful engines to drive them through the waves. It can come as a shock, therefore, to find out that scientists and engineers have come up with a new way to overcome viscous drag resistance and to move through water at high velocities. In general, the idea is to minimize the amount of wetted surface on the body by enclosing it in a low-density gas bubble.

When a fluid moves rapidly around a body, the pressure in the flow drops, particularly at trailing edges of the body, explains Marshall P. Tulin, director of the Ocean Engineering Laboratory at the University of California at Santa Barbara and a pioneer in the theory of supercavitating flows. As velocity increases, a point is reached at which the pressure in the flow equals the vapor pressure of water, whereupon the fluid undergoes a phase change and becomes a gas: water vapor. In other words, with insufficient pressure to hold them together, the liquid water molecules dissociate into a gas.
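
The onset condition Tulin describes is commonly quantified with the cavitation number, sigma = (ambient pressure minus vapor pressure) divided by the dynamic pressure; cavitation becomes likely as sigma approaches zero. This definition is standard in the field but is not given in the passage, and the numbers below are illustrative.

    def cavitation_number(p_ambient_pa, p_vapor_pa, density_kg_m3, velocity_m_s):
        """Dimensionless margin between the local pressure and the vapor pressure."""
        dynamic_pressure = 0.5 * density_kg_m3 * velocity_m_s ** 2
        return (p_ambient_pa - p_vapor_pa) / dynamic_pressure

    # Water near the surface (about 1 atm; vapor pressure roughly 2.3 kPa at 20 C):
    print(cavitation_number(101_325, 2_300, 1000, velocity_m_s=10))   # about 2.0
    print(cavitation_number(101_325, 2_300, 1000, velocity_m_s=50))   # about 0.08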

Under certain circumstances, especially at sharp edges, the flow can include attached cavities of approximately constant pressure, filled with water vapor and air, trailing behind. “This is what we call natural cavitation,” Tulin says.



The Virtues of SafeHull (Naval)

The SafeHull System is an innovative dynamic-based method for the design and evaluation of hull structures developed by ABS. In essence, the virtue of SafeHull is that it can lead to safer, more durable ships through the identification of critical areas in the hull structure. For new designs, this means material can be placed where it is most needed; for existing vessels it makes possible closer scrutiny of critical areas during survey and more effective planning of maintenance schedules.

In applying SafeHull to new designs, the loads and the resulting stresses and displacements imposed on the hull structure can be quantified in an integrated and realistic manner. SafeHull provides an innovative flexible approach that explicitly considers the structure’s sensitivity to corrosion as well as the dominant failure modes – yielding, buckling and fatigue. The major benefits derived from applying SafeHull to new vessel designs are:

  1. Reduced risk of structural failure
  2. Safer, longer-lived tanker, bulk carrier and containership structures
  3. Lower life-cycle maintenance and repair costs
  4. More effective use of steel for long-term benefit
  5. A more streamlined ABS review process
  6. A more rapid means for exploring innovative designs while maintaining safety and efficiency

Through ABS SafeHull Condition Assessment Services vital information on existing vessels can be generated leading to the major advantages of:

  1. Effective determination of required steel replacements through dynamic-based structural evaluation
  2. Additional protection against structural failures thereby providing added protection to life, property and the environment
  3. Demonstration of due diligence
  4. Lower life-cycle maintenance and repair costs through more effectively planned surveys
  5. Potentially higher resale value through technological evaluation of hull integrity.

The ABS SafeHull System for new tankers, bulk carriers, and containerships is a complete technical resource comprising two guides – one for dynamic-based design and evaluation of structures and the other for fatigue assessment – as well as a comprehensive suite of software applications programs, technical support services, and related technical documentation and guidance.

ABS = American Bureau of Shipping




Last updated: 23/04/2009