Monday, September 30, 2019

Hierarchical Team

A hierarchical team is a type of team organization structure in which the team is divided into hierarchies with many middle managers (Mohr 1982). An overall manager, placed at the top of the hierarchy, is responsible for leading the managers of each hierarchy to make sure that the team’s objectives are met, as well as the overall objectives of the organization. This manager works with the middle managers to ensure that the team runs smoothly and that organizational goals are being achieved. Each hierarchy has its own middle manager, who is in charge of a department. This allows each department to be independent and to deal with its own problems without being concerned with what the other departments are doing. The middle manager leads the department towards achieving team goals (Heckscher and Donnellon 1994) and is responsible for the department at large, making sure that the team’s objectives, as well as the overall objectives of the organization, are being achieved. The middle managers communicate their progress to the overall team manager, who evaluates performance and decides whether the team is achieving its intended objectives along with the overall objectives of the organization. Because a hierarchical team is divided into hierarchies (Robbins and Judge 2007), the team runs smoothly: each department has its own allocated jobs and can undertake them independently. This helps ensure that each department, or hierarchy, does its work well, since it is responsible for itself and cannot blame any other department or hierarchy if it fails to deliver.
This motivates each department to work hard to achieve its targets, so as not to be blamed if the team fails to achieve its intended objectives or the organizational objectives. The team members interact as they work and are therefore able to combine their skills and achieve organizational goals more easily (Thareja 2007). The structure below shows what a hierarchical team looks like. A team in an organization is very important, as it helps people to work together to achieve organizational goals. People are also able to work in a friendly environment, creating a good and more relaxed working atmosphere. People who work relaxed deliver better than those who work under tension (Thareja 2007). Therefore, people in a hierarchical team deliver better than people working individually, because they interact as they perform their duties.

Advantages of hierarchical team

1. Division of work: each hierarchy within the team is allocated certain jobs to undertake, and it can therefore undertake those tasks well because that is what it is good at (Lim and Sambrook 2010).
2. Flexibility: within a team there are people able to perform more than one task, so if someone in a department is absent, another person can perform their duties and work continues smoothly.
3. The hierarchies within the team are able to share ideas, which helps ensure that organizational goals are met.
4. Because the departments are independent, they can make the decisions that are best for them and that will help them achieve the team’s goals as well as the organizational goals.
5. Even though the whole hierarchy is a team, each department is allocated its particular job, which makes the department feel ownership of a success (Pugh 1990). For example, if a department in a team is allocated advertising, that hierarchy will feel good when sales increase.
6. Working as a team, even within a hierarchy, boosts the morale of the workers, because they interact as they work and the work is shared among the hierarchies.
7. Because the team is organized in a hierarchical manner, leadership is shared: a middle manager is allocated to each department (Thareja 2007). This helps the team run smoothly, and responsibility rests not with one person but with several, according to the number of hierarchies in the team.
8. The team can deliver better products because it comprises a mixture of people with different talents, and these talents can be used to achieve organizational goals.

Disadvantages

1. Work can be divided unfairly among the departments. The work the whole team is supposed to undertake may be divided unfairly among the departments or hierarchies (Amaral and Uzzi 2007), so some may be allocated more jobs than others and thus work more than the other departments.
2. Arguments among the departments may arise. If the team’s objectives are not met, the departments may start blaming one another, resulting in arguments and even damaged relationships in the organization.
3. Because each department makes its own decisions, coordinating the team may be difficult, as the departments work independently.
4. Because a hierarchical team focuses more on working as a team than as individuals, some workers’ talents and skills may stagnate because they are not needed or used in the team (Burns and Stalker 1961).
5. Communication can be poor because it is vertical: the top manager must communicate down through the hierarchies, while a manager at the bottom of the hierarchy must reach the top manager the same way.
6. If one department fails to perform its tasks, it may lead to the failure of the whole team, in that the whole team may not be able to achieve its goals.

Sunday, September 29, 2019

Marvell vs Herrick

Youth comes around once in a lifetime and it’s not something you can save for later. “To His Coy Mistress” by Andrew Marvell and “To the Virgins, to Make Much of Time” by Robert Herrick portray the underlying theme of carpe diem, or “seize the day”: enjoying life to the fullest. Both of these poems try to persuade women of great beauty to realize the advantage of their good looks while young, before time takes its toll on their beauty. Both poets use their words to convince someone to act, in this case to savor youth, virginity and beauty; they are trying to convince young virgins to live life to their fullest potential. Marvell’s and Herrick’s poems share the same theme and central belief but have different audiences and use different ways to express their ideas. Both poems use carpe diem as their major theme. Herrick’s poem portrays carpe diem by citing the shortness of life and persuading young women to marry and enjoy life, taking advantage before death takes its turn. He says “gather ye rosebuds while ye may, old time is still a-flying,” which shows that the virgins, here referred to as rosebuds, are just beginning to live and don’t have any experience yet; but time flies and one ages fast, so it’s better to enjoy the good years while there is time (Herrick 1-2). Carpe diem is used from the beginning in Marvell’s poem, “Had we but world enough, and time, this coyness, lady, were no crime,” saying that even though he wants all the time in the world to spend with her, there isn’t enough, so she is committing a crime by making him wait for her virginity (Marvell 1-2). By urging her to live life to its fullest potential, he wants to persuade his mistress into a sexual relationship. “To the Virgins, to Make Much of Time” uses the meaning of carpe diem by encouraging young women to make use of their time by finding love while young and getting married before they get old and lose their beauty.
Marvell and Herrick encourage young women to seize the day and not pass up chances, since opportunities are hard to find. Marvell’s and Herrick’s poems share a central belief that young virgins should not wait to have sex, because nobody knows what the future holds. Both poets want to convey that tomorrow may never come, so it’s better to act now and not wait out of coyness. They use death and aging as the reason not to lose time and to make use of virginity while young. Marvell tries to lure a woman into sleeping with him by using time as a defense to experience pleasure now; he tells her that time is running out: “Now let us sport us while we may, and now, like amorous birds of prey,” making use of their strength and youth to consummate their love (Marvell 37-38). He tries to convince the mistress that it is better to have sex now than to save her virginity for the future. Herrick recommends that all virgins make use of their youth, find love and enjoy life’s pleasures, because old age is near. He emphasizes not wasting time, as he feels women are at their best in their prime, when they are young and untainted, saying “then be not coy, but use your time, and, while ye may, go marry” (Herrick 13-14). The idea in both poems is to take advantage of being young and beautiful, because time flies and people get old sooner rather than later. Marvell and Herrick dedicate their poems to different audiences. Marvell writes specifically to his mistress, trying to woo her with promises of everlasting love. Herrick, however, dedicates his poem to young virgins and wants to give them the idea of marriage while love and flesh are young, so they need not suffer or be lonely in the later years of life. In the beginning of “To His Coy Mistress,” Marvell praises his woman, writing how her modesty wouldn’t matter if time were not an issue, but it is.
He states she is a virgin because she is coy, and later begins to diminish her ideals and beauty with aging and death, saying “then worms shall try that long-preserved virginity, and your quaint honor turn to dust,” to state that there is no reason for her to keep her virginity till the grave (Marvell 27-29). Everything in Marvell’s poem is about his wish to enjoy sexual pleasure with this woman, and he does everything in his power to scare her of dying without having sex first. Herrick’s poem is about the urgency and duty of the virgins to go forth and marry while young and beautiful, before everything is lost to time and old age, warning them of the sufferings that come if they fail to listen to his advice. Marvell and Herrick use different ways to express their ideas in the poems. In “To the Virgins, to Make Much of Time,” Herrick uses a rather short poem to make his point short and simple, versus the long and descriptive “To His Coy Mistress” by Marvell. Herrick takes an optimistic view of taking advantage of youth and uses simple, warm imagery to show that beauty fades over the years and the effects of wasting time. On the other hand, Marvell’s poem is more detailed, beautiful and at the same time dark, to suggest to the mistress that she shouldn’t waste her youth and virginity while she is in the prime of her life. He uses ugly and realistic ideas to snap the mistress out of the notion of eternal love, to finally lure her to make love with him and make time the last thing on their minds. Marvell is more in-depth and emotional, while Herrick is calm and regretful. Both poems compare to each other by using the underlying theme of carpe diem: making the most of each moment before old age arrives and beauty disappears. Marvell is very emotional and persuasive, while Herrick is less personal, giving useful advice to young people.
“To His Coy Mistress” is an expression of Marvell’s most deeply rooted impulses: how he feels about the ideas the lady has about losing her virginity, and the fact that he wants to spend time loving her and adoring her in bed. “To the Virgins, to Make Much of Time” is a poem about Herrick’s wish for the youth to realize that now is their time and not to waste any of it because of coyness, addressing his thoughts to the young generation so they may have a fulfilled life and not be shy of trying new things, as those who are not afraid are the ones who will enjoy the most.

Works Cited

Marvell, Andrew. “To His Coy Mistress.” The Seagull Reader: Poems. Ed. Joseph Kelley. W. W. Norton & Company, Inc., 2008. 220-222. Print.

Herrick, Robert. “To the Virgins, to Make Much of Time.” The Seagull Reader: Poems. Ed. Joseph Kelley. W. W. Norton & Company, Inc., 2008. 159-160. Print.

Saturday, September 28, 2019

Create-A-Greeting-Card College Scholarship Contest

With college costs on the rise, more and more families are looking for ways to bridge the gap between what they can afford to pay and the price of tuition and fees. Fortunately, a number of independent scholarships exist to help students attend their dream colleges. While securing these scholarships isn’t as simple as filling out the FAFSA form, students who are willing to put in the time and effort can often secure significant amounts of money toward financing their college education. Different scholarship opportunities exist to recognize students with various talents, skills, and interests. For high school students who are artistically minded, it might be worth considering the Gallery Collection’s Create-A-Greeting-Card College Scholarship Contest. The winner of this award receives a $10,000 college scholarship, with an additional $1,000 award going to the student’s high school. The Create-A-Greeting-Card contest invites applicants to use their innate talent and creativity to design a holiday, get well, or birthday greeting card. To apply, simply submit a work of art, computer graphic, or photo intended for the front side of a greeting card. Each image must be submitted in JPEG format and be two megabytes or smaller. Entries are judged on several factors, and candidates can view current designs online at www.gallerycollection.com. To apply for this scholarship, you must be a high school, college, or university student who is currently enrolled. All applicants must be U.S. citizens. If you have additional questions or concerns about the requirements, feel free to contact the Gallery Collection online at scholarshipadmin@gallerycollection.com. Estimating your chances of getting into a college is not easy in today’s competitive environment. Thankfully, with our state-of-the-art software and data, we can analyze your academic and extracurricular profile and estimate your chances.
Our profile analysis tool can also help you identify the improvements you need to make to enter your dream school. It’s not enough simply to hope you get a scholarship for college. Savvy students seek out a variety of contests and programs offering scholarships to reduce the cost of their degrees. Fortunately, you don’t have to look far to find lucrative opportunities. Not only do a number of high schools boast scholarship programs targeted toward college-bound students, but many community organizations offer contests as well. You can discover scholarship programs through your town or city, church, Lions Club, or even businesses in your community. When deciding which scholarships to target, students often ignore contests with smaller awards. While you might be hesitant to invest time and energy in applying for a $500 scholarship, the truth is that these awards can add up quickly. After all, winning five $500 scholarships is the same as securing one worth $2,500. Moreover, the number of applicants might be lower because of the smaller potential payout, so your odds of winning might be higher with these contests. Similarly, scholarship contests that require a lot of work tend to draw a smaller applicant pool. After all, busy high school seniors are often hesitant to invest their time in writing lengthy essays or shooting video submissions. According to one Money.com article, scholarships requiring 1,000-word essays tend to receive fewer than 500 submissions. So don’t be afraid to put in a little extra effort to score some additional funding. Unlike student loans, scholarships represent financial awards that don’t have to be paid back, so it’s only logical to apply to a wide range of contests and opportunities. At , we created our Applications Program to help students gain admission to their dream schools and find the financing they need to make their dreams a reality.
From creating a custom roadmap for the applications process to filling out FAFSA and scholarship forms, we help students best the competition. To learn more about our services, call today or contact our team online.

Friday, September 27, 2019

Managing the Personal Selling Function Assignment

Marketing communication is a fundamental tool for any company that needs to satisfy its customers’ needs. It includes advertising, packaging, personal selling and more. In addition, marketing communication outlines concepts that include the positioning of a brand, the marketing message and how a company wants consumers to view its brand. Personal selling is a promotional method in which a salesperson uses skills and techniques to build relationships with potential clients. It occurs through face-to-face meetings or through communication via telephone that allows information to be conveyed. Personal selling involves finding new prospects with whom a salesperson makes direct contact. A salesperson needs to prepare adequately to meet a prospective buyer and to present the product in a manner that shows a thorough understanding of it, because clients may ask questions about the products, and the salesperson should be able to address their concerns. Salespersons should be able to deal with obstacles put in their way by clients. It is important to close a sale, and this depends on the knowledge and skill of the salesperson in closing a deal. Salespersons should be strategic in convincing customers to buy a product (Hutt & Speh, 2007). The sales management function is to facilitate the activities involved in the movement of goods from the supplier to the customer.

Thursday, September 26, 2019

X-ray Fluorescence Essay Example | Topics and Well Written Essays - 1500 words

Henry Moseley numbered the elements in 1913 through the observation of K-line transitions in X-ray spectra. This formed the basis of element identification through X-ray fluorescence spectroscopy, by considering the relationship between atomic number and frequency. X-ray fluorescence (XRF) refers to the emission of characteristic secondary, also called fluorescent, X-rays when a material is bombarded with high-energy X-rays or gamma rays so that the material becomes excited. X-ray wavelengths range between roughly 0.1 and 100 Å, related to energy by E = hν, where h is the Planck constant, 6.626 × 10⁻³⁴ J·s, and ν is the frequency in hertz. High-energy X-rays are required for XRF because soft X-rays are absorbed by the target element, with the absorption edges depending on the ionisation energies of the respective electrons, unique to each element. While the energy-dispersive XRF (EDXRF) methodology detects all elements from Na through to U, wavelength-dispersive XRF (WDXRF) detects down to Be (Shackley 34).

How XRF Works

When the atoms of the target material absorb the high-energy photons from the X-rays or gamma rays, electrons in an inner shell are ejected from the atom, becoming photoelectrons. As a result, the atom is left in an excited state with a vacancy in its inner shell. Outer-shell electrons then fall into this vacancy, in the process emitting photons whose energy equals the difference in energy between the two states. Each element has its own unique set of energy levels, so each element emits a characteristic pattern of X-rays unique to itself, which Sharma (527) refers to as characteristic X-rays. As the concentration of the corresponding element increases, so does the X-ray intensity.
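The energy-wavelength relation quoted above (E = hν, equivalently E = hc/λ) can be checked with a short script. This is a minimal sketch; the Cu Kα line at roughly 1.54 Å used in the example is an illustrative value, not taken from the essay:

```python
# Photon energy from wavelength via E = h*nu = h*c / lambda.
H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
KEV_IN_J = 1.602e-16  # 1 keV expressed in joules

def photon_energy_kev(wavelength_angstrom):
    """Return photon energy in keV for a wavelength given in angstroms."""
    wavelength_m = wavelength_angstrom * 1e-10
    return H * C / wavelength_m / KEV_IN_J

# Illustrative: the Cu K-alpha line near 1.54 angstroms comes out around 8 keV,
# consistent with characteristic X-rays serving as element-specific fingerprints.
print(round(photon_energy_kev(1.54), 2))
```

Shorter wavelengths give proportionally higher energies, which is why hard X-rays are needed to excite the inner-shell transitions used in XRF.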
This phenomenon also applies in the quantitative analysis of elements through the production of optical emission spectra. With characteristic X-rays resulting from transitions between the energy levels in an atom, an electron that transitions from energy level Ei to Ej emits an X-ray with energy Ex = Ei – Ej. With each element having a unique set of atomic energy levels, a unique set of X-rays is emitted, characteristic of the element (Sharma 526). Considering Bohr’s atomic model (see fig. 1), with atomic levels designated K, L, M and so forth, each with additional sub-shells, a transition between these shells results in the emission of characteristic X-rays.

Fig. 1. Bohr’s atomic model, from Sharma (527)

As such, an M X-ray results from a transition to the M shell, just as a K X-ray results from a transition to the K shell. A Kβ1 X-ray results from an electron dropping from the M3 sub-shell to fill a vacancy in the K shell (see fig. 2). The emitted X-ray has energy EX-ray = EK – EM3.

Fig. 2. X-ray line labelling, from Bounakhla and Tahri (12)

Sources

According to Bounakhla and Tahri (21), radioisotopes provide the simplest source configuration, since one selects a source that emits X-rays slightly above the target element’s absorption-edge energy. They have found wide application due to their stability and small size in contexts where monochromatic and continuous sources are required. They serve well with regard to ruggedness, reliability, simplicity and cost of equipment. For safety, emissions are limited to approximately 10⁷ photons. The activity is described in terms of the disintegration rate of the radioisotope, where this activity decreases from an initial activity A0 to a final activity At over a duration of time t: At = A0e^(-0.693t/T½), where T½ is the half-life.
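The decay relation At = A0·e^(-0.693t/T½) is easy to sketch numerically. The Fe-55 source and its roughly 2.74-year half-life below are illustrative assumptions (Fe-55 is a commonly cited XRF radioisotope source), not values given in the essay:

```python
import math

def activity(a0, t, half_life):
    """Remaining activity A_t = A_0 * exp(-0.693 * t / T_half).

    0.693 approximates ln(2); t and half_life may be in any consistent
    time units, and a0 in any activity unit.
    """
    return a0 * math.exp(-0.693 * t / half_life)

# Illustrative Fe-55-like source: after one half-life (~2.74 years),
# activity falls to about half its initial value.
a0 = 100.0  # initial activity, arbitrary units
print(round(activity(a0, 2.74, 2.74), 1))
```

This decay is why radioisotope XRF sources must be replaced periodically even though they are otherwise rugged and simple.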

Where Human Life First Begins Coursework Example | Topics and Well Written Essays - 1000 words

Mr. Will is the root cause of the deaths of both Asha and her baby. The law must hold him responsible for this manslaughter and convict him. It is important to note that murder is not necessarily executed with a weapon; death can also be caused psychologically. Mr. Will, the father of the unborn baby, never appreciated or loved the pregnancy. His hate began the moment Asha told him she was pregnant, ending their happy five-year relationship, since Mr. Will was unhappy. The situation went from bad to worse when his attempts to persuade Asha to abort went in vain. This shows that Mr. Will had planned to kill the baby earlier through abortion. He knew that abortion is illegal, since it kills a life, and that it is risky to the mother too. Mr. Will’s intention to force an abortion could also have killed Asha, and Will knew that after the abortion the baby would die. Moreover, the gestation period of the fetus was cut short by the violence. When Will hit Asha and she fell down the stairs, that is when Will actually killed the baby. According to scientists, domestic violence is one of the modifiable risk factors that mainly result in adverse pregnancy outcomes (CDC 2013). In the world today, approximately 26.7% of pregnant women are physically abused during pregnancy (CDC 2013). The physical abuse includes being beaten, threatened with a weapon, abused verbally or even thrown out. Of these, 10.9% of cases of physical violence have ended in premature births (CDC 2013).

Wednesday, September 25, 2019

Policy Development Paper Essay Example | Topics and Well Written Essays - 2750 words

As the essay discusses, innocent people are given incentives and fall into the traps of agents who smuggle them illegally into countries and then exploit them. Poverty in developing countries is one of the reasons why human trafficking increases every day. Strict policies and law enforcement are required so that this abuse of human life can be stopped. This paper states that human trafficking is defined as the transportation, recruitment or harboring of a person, by deception, fraud, force or by giving an incentive, for the purpose of exploitation. Human trafficking is very common in developing countries, where people are given incentives of employment and are transported legally or illegally into developed countries where they are exploited. The impact of human trafficking on human society is disastrous. People who are transported into a country are kept in bonded labor. They are not allowed to use their passports, so they cannot leave the country. We can say it is a form of modern slavery. These people are deprived of their civil rights and forced into undesirable activities. Women are the main victims of human trafficking, and they are usually used for prostitution. Prostitution is illegal in most countries, so girls from third-world countries are brought to the developed world in hope of employment and later are forcefully used for prostitution. These girls are not allowed to leave the country, and criminal syndicates earn money from their work. A number of diseases can also spread in a country due to the prostitution of trafficked women. In the United States, people have a fair bit of knowledge about sexual diseases, but these girls come from backward societies, so they have little or no idea of sexual diseases and their prevention. This makes them more prone to sexual diseases than other girls in the same business.
Also, they cannot go to hospitals because of their lack of identity, so they are forced to live with these diseases. They can transfer the diseases to their customers, and in this way sexual diseases spread through prostitution. Human trafficking is harmful to society because trafficked people are used to fuel the illicit activities of criminal groups. These people usually have no record of their existence, so under coercion they perform activities like drug smuggling and prostitution. These crimes are fueled by people who are brought into the country by human trafficking. These people are easy targets because they are threatened, and in turn they do anything their ‘masters’ want them to do. Bonded labor is another way in which human trafficking victims are exploited. They are used for producing a variety of products in factories or are made to work in mines (Penketh, A. 2006). Victims of human trafficking are seen as very good workers because they are very cheap and can even perform hazardous jobs. There is also no need for insurance or other benefits for these workers. The products produced by these victims are sold at low prices and a competitive advantage is achieved (Penketh, A. 2006). Human trafficking also impacts society by reducing employment opportunities for the people of a country. Trafficked individuals are cheap labor, so they are preferred by industries where manual labor is

Tuesday, September 24, 2019

What are the factors that may contribute to our brain's efficiency and accuracy when performing a task - Essay

Research studies show that there are numerous ways in which one can improve the brain’s efficiency and accuracy while performing a task. According to Richard Restak, the brain’s efficiency and accuracy are significantly improved by performing one task at a time rather than multi-tasking. This is because human brains have limitations that we must accept. Multi-tasking makes the overall performance of the brain slower, or less efficient, than it would be if an individual performed one task at a time. This argument is based on neuroscientific evidence and previously conducted experiments, among them David Meyer’s research. Meyer found that multitasking affects both the efficiency and the accuracy of the brain. He stated that “not only the speed of performance, the accuracy of performance, but what I call the fluency of performance, the gracefulness of their performance, was negatively influenced by the overload of multitasking.” This runs against the popular misconception, which many people believe, that multitasking keeps one’s head above the rising flood of daily demands. Performing one task at a time improves the brain’s efficiency and accuracy because human short-term memory stores between five and nine items at a time. Attempting more than one task that requires both attention and consideration slows the brain’s efficiency and accuracy. Human brains cannot take in and process more than one stream of information and effectively encode it into short-term memory. If information taken in does not make it into short-term memory, it cannot be transferred to long-term memory for later recall. However, multitasking may not affect the brain’s efficiency and accuracy to a great extent, especially if the tasks being

Monday, September 23, 2019

International Relations Theories and Business Essay

Two positivist schools of thought are prevalent: the realism and liberalism theories. This article will discuss realism and liberalism, as well as the well-known theories of Marxism, functionalism and constructivism, and their impacts on small businesses. The realism theory makes several assumptions. Among these is the unitary nature of nation-states as geographically based actors, with no authority above them capable of regulating relations between states; hence, the assumption that no true world government exists. Further, the theory assumes that sovereign states, rather than non-governmental organizations, are the principal actors in international relations. This leads to a situation in which states compete with one another. The state also acts as the main agent of its own security and economic interests, with the view that international corporations have little long-term independent influence. Moreover, there is a general distrust of long-term cooperation and alliances. In sum, realists view man as self-centered and competitive.2 The effect this theory has on small businesses may be discussed at the micro and macro levels. At the micro level, small businesses view themselves as competitive and self-centered. Since customer satisfaction is the main key to the survival of a business, enterprises adopt methods that make their products and services competitive in the marketplace, such as product development and innovation. They also specialize, so that consumers will turn to their specialized products when they need them. Competition’s main advantage lies in product development and more efficient services, mainly in the quality, usefulness and beauty of the product or service. However, failing to diversify means more problems for a small enterprise that fails to cope with stiff competition in the marketplace.
This results in smaller profits and larger expenses for the enterprise. At the macro level, the country develops its own protective policies, which restrict the entrance of international markets and foreign investments except when needed. As a result, small businesses may be given protection against foreign competitors that seek to attract the local market with new technology and a creative workforce. On the other hand, since the government is the main actor in its economy, foreign competitors may be replaced with government-owned and controlled corporations competing with small enterprises.

The Liberalism Theory on Small Businesses

Liberalism values state preferences more than state capabilities as the determinant of state behavior. Plurality is the key term in defining state actions. Preferences vary from state to state, and international relations are based not only on political matters but also on economic matters, whether through international organizations, commercial firms, partnerships or individuals. The concept of liberalism flows into international cooperation and wider notions of power.3 The impact of liberalist policies on small enterprises is material. As a result of the government’s liberal outlook, less restrictive economic policies are initiated that invite foreign competition. Local small business enterprises may have difficulty coping with foreign competition

Saturday, September 21, 2019

How Accurate Is Eyewitness Testimony Essay Example for Free

How Accurate Is Eyewitness Testimony Essay The bedrock of the American judicial process is the honesty of witnesses in trial. Eyewitness testimony can make a deep impression on a jury, which is often exclusively assigned the role of sorting out credibility issues and making judgments about the truth of witness statements. In the U.S., there is the possibility of over 5,000 wrongful convictions each year because of mistaken eyewitness identifications. The continuous flow of media stories that tell of innocent people being incarcerated should serve as a signal to us that the human identification process is rife with error risks. These risks have been largely supported by research. Unfortunately, a jury rarely hears of the risks; therefore, eyewitness testimony remains a much-used and much-trusted process by those who are uninformed, many times lawfully uninformed. In cases in which eyewitness testimony is used, more often than not, an expert will not be allowed to testify to the faults of eyewitness identification. Thus, the uninformed stay blissfully ignorant of the inherent risks involved in eyewitness identification testimony. Too often, these blissfully ignorant people make up a jury of our peers (McAtlin, 1999). According to McAtlin, there are three parts to an eyewitness testimony: (1) witnessing a crime – as a victim or a bystander – involves watching the event while it is happening; (2) the witness must memorize the details of the occurrence; and (3) the witness must be able to accurately recall and communicate what he or she saw. Studies of wrongful conviction cases have concluded that erroneous eyewitness identifications are by far the leading cause of convicting the innocent. Several studies have been conducted on human memory and on subjects' propensity to erroneously remember events and details that did not occur.
When human beings try to acquire, retain and retrieve information with any clarity, suppositional influences and common human failures profoundly limit them. The law can regulate some of these human limitations; others are unavoidable. The unavoidable ones can make eyewitness testimony devastating in the courtroom and can lead to wrongful convictions. Unfortunately, memories are not indelibly stamped onto a brain video cassette tape. An event stored in the human memory undergoes constant change. Some details may be altered when new or different information about the event is added to the existing memory. Some details are simply forgotten, and normal memory loss occurs continually. Even so, witnesses often become more confident in the correctness of their memories over time. The original memory has faded and has been replaced with new information. This new information has replaced the original memory because the natural process of memory deterioration has persisted. Furthermore, individual eyewitnesses vary widely in fallibility and reasoning (McAtlin, 1999). Studies of wrongful conviction cases have concluded that erroneous eyewitness identifications are by far the leading cause of convicting the innocent. For example, the Innocence Project of Cardozo School of Law reports that of the first 130 exonerations, 101 (or 77.8 percent) involved mistaken identifications. But exactly how often eyewitnesses make tragic mistakes that lead to the punishment of innocent persons is unknown and probably unknowable. One of the infamous cases where mistaken identity led to wrongful conviction and execution was that of Gary Graham. Graham's case received widespread attention, in part because of substantial evidence indicating that he was innocent of the murder charge, and the indisputable fact that his court-appointed trial lawyer failed to mount a serious legal defense. Graham was convicted of killing grocery store clerk Bobby Lambert on May 13, 1981 during a robbery attempt.
Graham was 17 years old at the time. There was no physical evidence linking him to the crime and only one eyewitness who identified him as the murderer. Eyewitnesses who told police investigators Graham was not the killer were never called to testify at trial by Graham's lawyer. Constitutional Protections In Neil v. Biggers, the U.S. Supreme Court established criteria that jurors may use to evaluate the reliability of eyewitness identifications. The Biggers Court enumerated several factors to determine if a suggestive identification is reliable: (1) the witness's opportunity to view the suspect; (2) the witness's degree of attention; (3) the accuracy of description; (4) the witness's level of certainty; and (5) the time between incident and confrontation, i.e., identification. Courts today continue to allow suggestive identification testimony into evidence. Currently, courts consider the admissibility of identification testimony under a Fourteenth Amendment procedural due process analysis. If a court determines that a pretrial identification was unnecessarily suggestive, it then ascertains whether the suggestive procedure gave rise to a substantial likelihood of irreparable misidentification. A court will find a substantial likelihood of irreparable misidentification only if the identification is found to be unreliable. Therefore, even if the court concludes that a police identification procedure was suggestive, it may be admissible if the court finds that the identification is nevertheless likely to be accurate. A court will balance the suggestiveness of the identification procedure against the likelihood that the identification is correct, resulting in an unprincipled rule of law that turns on the court's subjective assessment of the defendant's guilt. Issues That Impact an Individual's Testimony A specific look at how memory functions and how suggestion operates illustrates why participation in unregulated lineups creates unreasonable risks of misidentification.
Identification procedures differ from other police investigatory procedures in that they rely solely on human memory. Human memory consists of three basic systems: (1) encoding, (2) storage, and (3) retrieval. "Encoding" is the initial processing of an event that results in a memory. "Storage" is the retention of the encoded information. "Retrieval" is the recovery of the stored information. Errors can occur at each step. Contrary to the common understanding of memory, not everything that registers in the central nervous system is permanently stored in the mind, and particular details become increasingly inaccessible over time. According to Loftus and Ketchum, "Truth and reality, when seen through the filters of our memories, are not objective facts but subjective, interpretive realities." Because these processes are unconscious, individuals generally perceive their memories as completely accurate and their reporting of what they remember as entirely truthful, no matter how distorted or inaccurate they, in fact, may be. An individual's memories become distorted even in the absence of external suggestion or internal personal distress. Naturally, people tailor their telling of events to the listener and the context (Loftus & Ketchum 1991). Many conditions such as fear, lighting, distance from the event, surprise, and personal biases all affect memory and recall. Human memory is indeed delicate, especially regarding victims and witnesses of crimes. Fear and traumatic events may impair the initial acquisition of the memory itself. At the time of an identification, the witness is often in a distressed emotional state. Many victims and witnesses experience substantial shock because of their traumatic experiences, which continues to affect them at the time of identification procedures.
In a particular case in court, the psychologist can determine the reliability of the evidence of a particular witness and enable the judge and the jury to put the proper value on such a witness's testimony. For example, a witness may swear to a certain point involving the estimation of time and distance. The psychologist can measure the witness's accuracy in such estimates, often showing that what the witness claims to be able to do is an impossibility. A case may hinge on whether an interval of time was ten minutes or twelve minutes, or whether a distance was three hundred or four hundred feet. A witness may swear positively to one or both of these points. The psychologist can show the court the limitations of the witness in making such estimates. Overview of Psychology and Law The service of psychology to law can be very great, but owing to the necessary conservatism of the courts, it will be a long time before they make much use of psychological knowledge. Perhaps the greatest service will be in determining the credibility of evidence. Psychology can now give the general principles in this matter. Witnesses go on the stand and swear to all sorts of things as to what they heard and saw and did, often months and even years previously. The expert clinical psychologist can tell the court the probability of such evidence being true. Experiments have shown that there is a large percentage of error in such evidence. The additional value that comes from the oath has been measured; the oath increases the likelihood of truth by only a small percentage. Psychologists sometimes provide expert testimony in the form of general testimony, where theory and research are described and applied to a problem before the court. The expert would not provide opinions about any party involved in the case before the court, but might give opinions about substantive research that is relevant to the issues.
Role of Psychology Professionals in Forensic Matters Clinical-forensic psychologists are employed in a variety of settings including state forensic hospitals, court clinics, mental health centers, jails, prisons, and juvenile treatment centers. Clinical-forensic psychologists are perhaps best known for their assessment of persons involved with the legal system. Because of their knowledge of human behavior, abnormal psychology, and psychological assessment, psychologists are sometimes asked by the courts to evaluate a person and provide the court with an expert opinion, either in the form of a report or testimony. For example, clinical-forensic psychologists frequently evaluate adult criminal defendants or children involved in the juvenile justice system, offering the court information that might be relevant to determining (1) whether the defendant has a mental disorder that prevents him or her from going to trial, (2) what the defendant's mental state may have been like at the time of the criminal offense, or (3) what treatment might be indicated for a particular defendant who has been convicted of a crime or juvenile offense. Increasingly, clinical-forensic psychologists are being called upon to evaluate defendants who have gone to trial, have been found guilty, and for whom one of the sentencing options is the death penalty. In such cases, psychologists are asked to evaluate the mitigating circumstances of the case and to testify about these as they relate to the particular defendant. Clinical-forensic psychologists also evaluate persons in civil (i.e., non-criminal) cases. These psychologists may evaluate persons who are undergoing guardianship proceedings, to assist the court in determining whether the person has a mental disorder that affects his or her ability to make important life decisions (e.g., managing money, making health care decisions, making legal decisions).
Clinical-forensic psychologists also evaluate persons who are plaintiffs in lawsuits, who allege that they were emotionally harmed as a result of someone's wrongdoing or negligence. Clinical-forensic psychologists may evaluate children and their parents in cases of divorce, when parents cannot agree about the custody of their children and what is best for them. Clinical-forensic psychologists are sometimes called on to evaluate children to determine whether they have been abused or neglected and the effects of such abuse or neglect, and to offer the court recommendations regarding the placement of such children. In addition to forensic assessment, clinical-forensic psychologists are also involved in treating persons who are involved with the legal system in some capacity. Jails, prisons, and juvenile facilities employ clinical psychologists to assess and treat adults and juveniles who are either awaiting trial, or who have been adjudicated and are serving a sentence of some type. Treatment in these settings is focused both on mental disorders and on providing these persons with skills and behaviors that will decrease the likelihood that they will re-offend in the future. Clinical-forensic psychologists employed in mental health centers or in private practice may also treat persons involved in the legal system, providing either general or specialized treatment (e.g., treatment of sex offenders, treatment of violent or abusive persons, and treatment of abuse victims). Conclusion Studies confirm that unregulated eyewitness testimony is often "hopelessly unreliable." Misidentifications are the greatest single source of wrongful convictions in the United States. Yet courts' current due process analyses are unsuccessful in ensuring fair procedures and preventing wrongful convictions.
A due process analysis alone is inadequate, in part because a due process analysis is essentially a fairness inquiry, and courts regard it as unfair to exclude a correct, yet suggestive, identification from evidence.

Friday, September 20, 2019

Underwater Acoustic Sensor Network (UASN)

Underwater Acoustic Sensor Network (UASN) CHAPTER 1: Introduction Most of the Earth's surface is covered by water, including fresh water from rivers, lakes, etc., and salt water from the sea. Many of these areas are still unexplored; exploring them requires significant research effort and good communication systems. A wireless sensor network in an aqueous medium has the ability to explore the underwater environment in detail. All underwater applications need a good communication system as well as an effective routing protocol, which together enable the underwater devices to communicate precisely. Underwater propagation speed varies with temperature, salinity and depth. Based on depth, two scenarios are distinguished: shallow and deep water. Shallow water has a depth of less than 200 m and cylindrical spreading; deep water has a depth greater than or equal to 200 m and spherical spreading. Different ambient noise and different spreading factors apply in shallow and deep water. CHAPTER 2: Study of Underwater Acoustic Sensor Network (UASN) Application of UASN Wireless sensor networks in an aqueous medium, also known as underwater sensor networks, have enabled a broad range of applications, including: Environmental Monitoring Underwater sensor networks can be used to monitor pollution, whether chemical, biological (such as the tracking of fish or micro-organisms), nuclear, or oil leakage, in bays, lakes or rivers [1]. Underwater sensor networks can also be used to improve weather forecasts, detect climate change, and predict the effect of human activities on marine ecosystems, ocean currents and temperature change, e.g. the effect of global warming on the ocean. Under Ocean Exploration Exploring minerals, oilfields or reservoirs, and determining routes for laying undersea cables, can be done with such underwater sensor networks.
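The introduction's two claims, that sound speed depends on temperature, salinity and depth, and that the 200 m threshold separates the shallow (cylindrical) and deep (spherical) regimes, can be sketched as follows. The text names no sound-speed formula, so Medwin's simplified empirical approximation is assumed here; the function names are illustrative.

```python
def sound_speed(T, S, z):
    """Approximate speed of sound in seawater (m/s), Medwin's formula
    (an assumption; the text does not name a specific formula).
    T: temperature (deg C), S: salinity (ppt), z: depth (m)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35) + 0.016 * z)

def water_regime(depth_m):
    """Classify per the text: < 200 m is shallow (cylindrical spreading),
    >= 200 m is deep (spherical spreading)."""
    if depth_m < 200:
        return ("shallow", "cylindrical")
    return ("deep", "spherical")
```

For example, at 10 deg C, 35 ppt salinity and 100 m depth this gives roughly 1490 m/s, close to the nominal 1500 m/s figure used later in the essay.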
Disaster Prevention Sensor networks that measure seismic activity from remote locations can provide tsunami warnings to coastal areas, or study the effects of submarine earthquakes (seaquakes) [2]. Equipment Monitoring Long-term equipment monitoring may be done with pre-installed infrastructure. Short-term equipment monitoring shares many requirements of long-term seismic monitoring, including the need for wireless (acoustic) communication, automatic configuration into a multihop network, localization (and hence time synchronization), and energy-efficient operation. Mine Reconnaissance By using acoustic sensors and optical sensors together, mine detection can be accomplished quickly and effectively. Assisted Monitoring Sensors can be used to discover danger on the seabed, locate dangerous rocks or shoals in shallow waters, mooring positions and submerged wrecks, and to perform bathymetry profiling. Information Collection The main goal of a communication network is the exchange of information inside the network and outside the network via a gateway or switch center. This application is used to share information among nodes and autonomous underwater vehicles. Characteristics of UASN Underwater Acoustic Networks (UANs), including but not limited to Underwater Acoustic Sensor Networks (UASNs) and Autonomous Underwater Vehicle Networks (AUVNs), are defined as networks composed of more than two nodes that use acoustic signals to communicate for the purpose of underwater applications. UASNs and AUVNs are two important kinds of UANs. The former is composed of many sensor nodes, mostly for a monitoring purpose; the nodes usually have no, or only limited, capacity to move. The latter is composed of autonomous or unmanned vehicles with high mobility, deployed for applications that need mobility, e.g., exploration. A UAN can be a UASN, an AUVN, or a combination of both.
Acoustic communication, on the other hand, is defined as a method of communicating from one point to another using acoustic signals; no network structure is formed in acoustic point-to-point communication. Sound travels best through water in comparison with electromagnetic waves and optical signals. An acoustic signal is a sound waveform, usually produced by sonar for underwater applications. Acoustic signal processing extracts information from acoustic signals in the presence of noise and uncertainty. Underwater acoustic communications are mainly influenced by path loss, noise, multi-path, Doppler spread, and high and variable propagation delay. All these factors determine the temporal and spatial variability of the acoustic channel, and make the available bandwidth of the Underwater Acoustic channel (UW-A) limited and dramatically dependent on both range and frequency. Long-range systems that operate over several tens of kilometers may have a bandwidth of only a few kHz, while a short-range system operating over several tens of meters may have more than a hundred kHz of bandwidth. These factors lead to low bit rates. Underwater acoustic communication links can be classified according to their range as very long, long, medium, short, and very short links. Acoustic links are also roughly classified as vertical and horizontal, according to the direction of the sound ray. Their propagation characteristics differ considerably, especially with respect to time dispersion, multi-path spreads, and delay variance. The acoustic signal is the only physically feasible tool that works in the underwater environment. By comparison, electromagnetic waves can travel only short distances in water because of the high attenuation and absorption effects of the underwater environment.
It is found that the absorption of electromagnetic energy in sea water is about 45√f dB per kilometer, where f is the frequency in hertz; in contrast, the absorption of acoustic signals over most frequencies of interest is about three orders of magnitude lower [40]. Hereafter, the factors that influence acoustic communications are analyzed in order to state the challenges posed by underwater channels for underwater sensor networking. These include: Path loss Attenuation is mainly provoked by absorption due to the conversion of acoustic energy into heat, which increases with distance and frequency. It is also caused by scattering and reverberation (on rough ocean surface and bottom), refraction, and dispersion (due to the displacement of the reflection point caused by wind on the surface). Water depth plays a key role in determining the attenuation. Geometric spreading is the spreading of sound energy as a result of the expansion of the wavefronts. It increases with the propagation distance and is independent of frequency. There are two common kinds of geometric spreading: spherical (omni-directional point source) and cylindrical (horizontal radiation only). Noise Man-made noise is mainly caused by machinery noise (pumps, reduction gears, power plants, etc.) and shipping activity (hull fouling, animal life on hull, cavitation), especially in areas encumbered with heavy vessel traffic. Ambient noise is related to hydrodynamics (movement of water including tides, currents, storms, wind, rain, etc.) and to seismic and biological phenomena. Multi-path Multi-path propagation may be responsible for severe degradation of the acoustic communication signal, since it generates Inter-Symbol Interference (ISI). The multi-path geometry depends on the link configuration. Vertical channels are characterized by little time dispersion, whereas horizontal channels may have extremely long multi-path spreads.
The extent of the spreading is a strong function of depth and the distance between transmitter and receiver. High delay and delay variance The propagation speed in the UW-A channel is five orders of magnitude lower than in the radio channel. This large propagation delay (0.67 s/km) can reduce the throughput of the system considerably. The very high delay variance is even more harmful for efficient protocol design, as it prevents accurate estimation of the round trip time (RTT), which is the key parameter for many common communication protocols. Doppler spread The Doppler frequency spread can be significant in UW-A channels, causing degradation in the performance of digital communications: transmissions at a high data rate cause many adjacent symbols to interfere at the receiver, requiring sophisticated signal processing to deal with the generated ISI. The Doppler spreading generates a simple frequency translation, which is relatively easy for a receiver to compensate for, and a continuous spreading of frequencies, which constitutes a non-shifted signal that is more difficult for a receiver to compensate for. If a channel has a Doppler spread with bandwidth B and a signal has symbol duration T, then there are approximately BT uncorrelated samples of its complex envelope. When BT is much less than unity, the channel is said to be underspread and the effects of the Doppler fading can be ignored, while, if it is greater than unity, the channel is overspread. Most of the described factors are caused by the chemical-physical properties of the water medium, such as temperature, salinity and density, and by their spatio-temporal variations. These variations, together with the waveguide nature of the channel, cause the acoustic channel to be temporally and spatially variable. In particular, the horizontal channel is by far more rapidly varying than the vertical channel, in both deep and shallow water.
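The figures quoted in this chapter (the 45√f dB/km electromagnetic absorption, spherical vs cylindrical spreading loss, the 0.67 s/km propagation delay, and the BT underspread/overspread test) can be collected into a few helper functions. The constants come from the text; the function names and the 1 m spreading reference are illustrative assumptions.

```python
import math

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in salt water

def em_absorption_db(f_hz, distance_km):
    """EM absorption in sea water: ~45*sqrt(f) dB per km, f in Hz (per the text)."""
    return 45 * math.sqrt(f_hz) * distance_km

def spreading_loss_db(r_m, geometry="spherical"):
    """Geometric spreading loss relative to 1 m:
    spherical 20*log10(r), cylindrical 10*log10(r)."""
    k = 20 if geometry == "spherical" else 10
    return k * math.log10(r_m)

def one_way_delay_s(distance_km):
    """Acoustic propagation delay: roughly 0.67 s per kilometer."""
    return distance_km * 1000 / SOUND_SPEED

def doppler_regime(doppler_bw_hz, symbol_duration_s):
    """BT < 1: underspread (Doppler fading ignorable); otherwise overspread."""
    return "underspread" if doppler_bw_hz * symbol_duration_s < 1 else "overspread"
```

For instance, at 10 kHz an electromagnetic signal would lose 45 × 100 = 4500 dB over one kilometer of sea water, while spherical spreading over the same kilometer costs only 60 dB, which is why acoustics dominates underwater.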
CHAPTER 3: Network Architecture Underwater sensor nodes: The underwater sensor nodes are deployed on the sea floor, anchored to the ocean bottom [32]. The sensors are equipped with floating buoys to push the nodes upwards; thus they are relatively stationary nodes [3]. Using acoustic links, they relay data to an underwater sink directly or via a multi-hop path. Underwater sink nodes: Underwater sink nodes take charge of collecting the data of underwater sensors deployed on the ocean bottom and then send it to the surface sink node. They may be equipped with vertical and horizontal acoustic transducers. The horizontal transceiver is used to collect the sensors' data, and the vertical transceiver provides a transmission link between the underwater sink and the surface sink node. Surface sink node: The surface sink node is attached to a floating buoy with satellite, radio frequency (RF) or cell phone technology to transmit data to shore in real time. 2D Model A reference architecture for two-dimensional underwater networks is shown in Figure 1. A group of sensor nodes is anchored to the bottom of the ocean. Underwater sensor nodes are interconnected to one or more underwater gateways by means of wireless acoustic links. Underwater gateways are network devices in charge of relaying data from the ocean bottom network to a surface station. To achieve this objective, they are equipped with two acoustic transceivers, namely a vertical and a horizontal transceiver. The horizontal transceiver is used by the underwater gateway to communicate with the sensor nodes in order to: send commands and configuration data to the sensors (underwater gateway to sensors); collect monitored data (sensors to underwater gateway). The vertical link is used by the underwater gateways to relay data to a surface station. In deep water applications, vertical transceivers must be long-range transceivers.
The surface station is equipped with an acoustic transceiver that is able to handle multiple parallel communications with the deployed underwater gateways. It is also endowed with a long-range RF and/or satellite transmitter to communicate with the onshore sink (os-sink) and/or a surface sink (s-sink). In shallow water, bottom-deployed sensors/modems may directly communicate with the surface buoy, with no specialized bottom node (underwater gateway). 3D Model Three-dimensional underwater networks are used to detect and observe phenomena that cannot be adequately observed by means of ocean bottom sensor nodes, i.e., to perform cooperative sampling of the 3D ocean environment. In three-dimensional underwater networks, sensor nodes float at different depths to observe a phenomenon. In this architecture, given in Figure 2, each sensor is anchored to the ocean bottom and equipped with a floating buoy that can be inflated by a pump. The buoy pushes the sensor towards the ocean surface. The depth of the sensor can then be regulated by adjusting the length of the wire that connects the sensor to the anchor, by means of an electronically controlled engine that resides on the sensor. Sensing and communication coverage in a 3D environment are rigorously investigated in [8]. The diameter, minimum and maximum degree of the reachability graph that describes the network are derived as a function of the communication range, while different degrees of coverage for the 3D environment are characterized as a function of the sensing range. 3D Model with AUV The above figure represents the third type of network architecture, which consists of sensor nodes and Autonomous Underwater Vehicles (AUVs) that act as mobile sensor nodes for ocean monitoring, underwater resource study, etc.
CHAPTER 4: Differences between Underwater and Terrestrial Sensor Networks An underwater acoustic channel differs from a ground-based radio channel in many aspects, including: Bandwidth is extremely limited. The attenuation of acoustic signals increases with frequency and range [6] [10]. Consequently, the feasible band is extremely small. For example, a short-range system operating over several tens of meters may have an available bandwidth of a hundred kHz; a medium-range system operating over several kilometers has a bandwidth on the order of ten kHz; and a long-range system operating over several tens of kilometers is limited to only a few kHz of bandwidth [11]. Propagation delay is long. The transmission speed of acoustic signals in salt water is around 1500 m/s [22], five orders of magnitude lower than the speed of electromagnetic waves in free space. Correspondingly, propagation delay in an underwater channel becomes significant. This is one of the essential characteristics of underwater channels and has profound implications for localization and time synchronization. The channel impulse response is not only spatially varying but also temporally varying. The channel characteristics vary with time and depend highly on the location of the transmitter and receiver. The fluctuating nature of the channel causes the received signals to be easily distorted. There are two types of propagation paths: macro-multipaths, which are the deterministic propagation paths, and micro-multipaths, which are random signal fluctuations. The macro-multipaths are caused by both reflection at the boundaries (bottom, surface and any object in the water) and bending. Inter-Symbol Interference (ISI) thus occurs.
Compared with the spread of its ground-based counterpart, which is on the order of several symbol intervals, ISI spreading in an underwater acoustic channel is several tens or hundreds of symbol intervals for moderate to high data rates in the horizontal channel. Micro-multipath fluctuations are mainly caused by surface waves, which contribute the most to the time variability of the shallow water channel. In deep water, internal waves impact the single-path random fluctuations [12][13]. The probability of bit error is much higher, and temporary loss of connectivity (shadow zone) sometimes occurs, due to the extreme characteristics of the channel. Cost. While terrestrial sensor nodes are expected to become increasingly inexpensive, underwater sensors are expensive devices. This is especially due to the more complex underwater transceivers and to the hardware protection needed in the extreme underwater environment. Also, because of the low economy of scale caused by the relatively small number of suppliers, underwater sensors are characterized by high cost. Deployment. While terrestrial sensor networks are densely deployed, in underwater networks the deployment is generally sparser. Power. The power needed for acoustic underwater communications is higher than in terrestrial radio communications because of the different physical layer technology (acoustic vs. RF waves), the greater distances, and the more complex signal processing techniques implemented at the receivers to compensate for the impairments of the channel. Memory. While terrestrial sensor nodes have very limited storage capacity, underwater sensors may need to be able to do some data caching, as the underwater channel may be intermittent. Spatial Correlation. While the readings from terrestrial sensors are often correlated, this is less likely to happen in underwater networks due to the greater distance among sensors.
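The range/bandwidth trade-off quoted from [11] earlier in this chapter can be captured in a small lookup. The band edges below follow the rough figures in the text (a hundred kHz for tens of meters, on the order of ten kHz for several kilometers, a few kHz for tens of kilometers) and are purely illustrative, not a standard.

```python
def usable_bandwidth_khz(range_km):
    """Rough usable acoustic bandwidth (kHz) for a given link range (km),
    following the illustrative figures quoted from [11] in the text."""
    if range_km <= 0.1:   # short range: tens of meters
        return 100
    if range_km <= 10:    # medium range: several kilometers
        return 10
    return 3              # long range: tens of kilometers ("a few kHz")
```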
CHAPTER 5: Layers of UASN The underwater network architecture consists of five layers: application, transport, network, data link and physical, as shown in the figure below. As typical underwater systems have limited processing capability, the protocol has been kept as simple as possible without significantly compromising performance. The underwater sensor network specifications currently do not include any recommendations for authentication and encryption. These may easily be implemented at the application layer or via a spreading scheme at the physical layer. Each layer is described by a SAPI. The SAPI is defined in terms of messages being passed to and from the layer. The clients (usually higher layers) of a layer invoke the layer via a request (REQ). The layer responds to each REQ with a response (RSP). Errors are reported via an ERR RSP with error codes. If the layer needs to send unsolicited messages to the client, it does so via a notification (NTF). A layer communicates logically with its peer layer via protocol data units (PDUs). As the peer-to-peer communication is symmetric, a layer may send a REQ PDU to its peer layer at any time; it would optionally respond to such a PDU with a RSP PDU. This is logically depicted in the figure below. It may be desirable in some cases that non-neighboring layers communicate with each other to achieve cross-layer optimization. This may be implemented by allowing REQ and RSP PDUs between any two layers in the protocol stack. The underwater sensor network specifications define detailed message structures for all SAPI messages. These message structures include message identifiers, data formats to be used, and parameters and their possible values. Physical Layer The physical layer provides framing, modulation and error correction capability (via FEC). It provides primitives for sending and receiving packets. It may also provide additional functionality such as parameter setting, parameter recommendation, carrier sensing, etc.
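The REQ/RSP/NTF message flow described above can be sketched as a minimal layer object. The message type names follow the text; the class shape, the request names, and the error code are illustrative assumptions, not part of any UASN specification.

```python
class Layer:
    """Minimal sketch of a protocol layer driven by SAPI-style messages:
    clients send a REQ, the layer answers with a RSP (or ERR RSP with an
    error code), and unsolicited events go out as NTFs."""

    def __init__(self, name):
        self.name = name
        self.notifications = []  # unsolicited NTFs queued for the client

    def request(self, req):
        """Handle a REQ and return a RSP; unknown requests yield an ERR RSP."""
        if req == "SEND":  # illustrative request name
            return {"type": "RSP", "status": "OK"}
        return {"type": "ERR RSP", "error": "UNSUPPORTED"}

    def notify(self, event):
        """Queue an unsolicited NTF for the client."""
        self.notifications.append({"type": "NTF", "event": event})
```

A client of, say, a physical layer instance would call `request("SEND")` and receive a RSP, while link events would arrive asynchronously as NTFs.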
Early underwater channel development was based on non-coherent frequency shift keying (FSK) modulation, since it relies on energy detection and thus does not require phase tracking, a very difficult task mainly because of the Doppler spread in the underwater acoustic channel. Although non-coherent modulation schemes are characterized by high power efficiency, their low bandwidth efficiency makes them unsuitable for high-data-rate multiuser networks. Hence, coherent modulation techniques have been developed for long-range, high-throughput systems. In recent years, fully coherent modulation techniques such as phase shift keying (PSK) and quadrature amplitude modulation (QAM) have become practical thanks to the availability of powerful digital processing. Channel equalization techniques are exploited to leverage the effect of inter-symbol interference (ISI), instead of trying to avoid or suppress it. Decision-feedback equalizers (DFEs) track the complex, relatively slowly varying channel response and thus provide high throughput when the channel varies slowly. When the channel varies faster, it is necessary to combine the DFE with a phase-locked loop (PLL) [9], which estimates and compensates for the phase offset in a rapid, stable manner. The use of decision-feedback equalization and phase-locked loops is driven by the complexity and time variability of ocean channel impulse responses. Differential phase shift keying (DPSK) serves as an intermediate solution between incoherent and fully coherent systems in terms of bandwidth efficiency. DPSK encodes information relative to the previous symbol rather than to an arbitrary fixed reference in the signal phase, and may be regarded as a partially coherent modulation. While this strategy substantially alleviates carrier phase-tracking requirements, the penalty is an increased error probability over PSK at an equivalent data rate.
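The differential encoding behind DPSK can be illustrated with a toy binary modulator and demodulator. This is a sketch of the principle only, not a practical acoustic modem:

```python
import cmath
import math

def dpsk_modulate(bits, ref_phase=0.0):
    """Binary DPSK: a 1 bit flips the phase by pi, a 0 bit keeps it."""
    phase = ref_phase
    symbols = []
    for b in bits:
        phase += math.pi * b          # encode relative to the previous symbol
        symbols.append(cmath.exp(1j * phase))
    return symbols

def dpsk_demodulate(symbols, ref_phase=0.0):
    """Recover bits by comparing each symbol with the previous one."""
    prev = cmath.exp(1j * ref_phase)
    bits = []
    for s in symbols:
        # a phase change near pi means a 1 bit; only the phase
        # *difference* matters, not the absolute carrier phase
        diff = s * prev.conjugate()
        bits.append(1 if diff.real < 0 else 0)
        prev = s
    return bits

tx_bits = [1, 0, 1, 1, 0]
rx_bits = dpsk_demodulate(dpsk_modulate(tx_bits))
```

Because decisions depend only on consecutive phase differences, a constant phase offset introduced by the channel leaves the decoded bits unchanged, which is exactly why DPSK alleviates carrier phase tracking.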
Another promising solution for underwater communications is orthogonal frequency division multiplexing (OFDM), a spread-spectrum technique that is particularly efficient when noise is spread over a large portion of the available bandwidth. OFDM is frequently referred to as multicarrier modulation because it transmits signals over multiple sub-carriers simultaneously. In particular, sub-carriers that experience higher SNR are allotted more bits, whereas fewer bits are allotted to sub-carriers experiencing attenuation, following the concept of bit loading, which requires channel estimation. Since the symbol duration on each individual carrier increases, OFDM systems perform robustly in severe multi-path environments and achieve high spectral efficiency. Many of the techniques discussed above require underwater channel estimation, which can be achieved by means of probe packets [17]. An accurate channel estimate can be obtained with a high probing rate and/or a large probe packet size, which however result in high overhead and in the consequent drain of channel capacity and energy.

Data link layer (MAC layer)
The data link layer provides single-hop data transmission capability: it cannot transmit a packet successfully if the destination node is not directly accessible from the source node. It may include some degree of reliability, and may also provide error detection capability (e.g. a CRC check). In the case of a shared medium, the data link layer must include the medium access control (MAC) sub-layer. Frequency division multiple access (FDMA) is not suitable for underwater sensor networks because of the narrow bandwidth of underwater acoustic channels and the vulnerability of limited-band systems to fading and multipath. Time division multiple access (TDMA) shows limited bandwidth efficiency because of the long time guards required in the underwater acoustic channel.
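The cost of those guard times can be quantified with a back-of-the-envelope sketch, assuming the nominal 1500 m/s sound speed; the network size and packet duration below are illustrative:

```python
def tdma_guard_time(max_range_m, sound_speed_mps=1500.0):
    """Guard time must cover the worst-case propagation delay."""
    return max_range_m / sound_speed_mps

def tdma_efficiency(payload_s, guard_s):
    """Fraction of each slot spent on useful transmission."""
    return payload_s / (payload_s + guard_s)

# a network spanning 3 km needs 2 s guards; with 0.5 s packets the
# channel carries useful data only 20% of the time
guard = tdma_guard_time(3000.0)
eff = tdma_efficiency(0.5, guard)
```

The same arithmetic explains why receiver-side guard times make CSMA equally inefficient: any scheme that must wait out the maximum propagation delay pays seconds of idle channel per packet.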
In fact, long time guards must be designed to account for the large propagation delay and delay variance of the underwater channel, in order to minimize packet collisions from adjacent time slots. Moreover, the variable delay makes it very challenging to realize the precise synchronization, with a common timing reference, that TDMA requires. Carrier sense multiple access (CSMA) prevents collisions with the ongoing transmission at the transmitter side. To prevent collisions at the receiver side, however, it is necessary to add a guard time between transmissions, dimensioned according to the maximum propagation delay in the network. This makes the protocol dramatically inefficient for underwater acoustic sensor networks. The use of contention-based techniques that rely on handshaking mechanisms such as RTS/CTS in shared medium access is impractical underwater, for the following reasons: large delays in the propagation of RTS/CTS control packets lead to low throughput; because of the high propagation delay of underwater acoustic channels, when carrier sensing is used, as in 802.11, the channel is more likely to be sensed idle while a transmission is ongoing, since the signal may not have reached the receiver yet; and the high variability of delay in handshaking packets makes it impractical to predict the start and finish times of other stations' transmissions. Thus, collisions are highly likely to occur. Code division multiple access (CDMA) is quite robust to the frequency-selective fading caused by underwater multi-paths, since it distinguishes simultaneous signals transmitted by multiple devices by means of pseudo-noise codes that spread each user's signal over the entire available band. CDMA reduces the number of packet retransmissions, which results in decreased battery consumption and increased network throughput.
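The pseudo-noise spreading that underpins CDMA can be illustrated with a toy direct-sequence sketch. The 7-chip code is chosen arbitrarily for illustration; real systems use much longer codes and operate on noisy waveforms rather than clean bits:

```python
def spread(bits, pn_code):
    """Spread each data bit over the full band by XOR with a PN chip sequence."""
    chips = []
    for b in bits:
        chips.extend(b ^ c for c in pn_code)
    return chips

def despread(chips, pn_code):
    """Correlate against the same PN code to recover the bits."""
    n = len(pn_code)
    bits = []
    for i in range(0, len(chips), n):
        block = chips[i:i + n]
        # mostly matching chips mean the bit was 0; mostly flipped, 1
        mismatches = sum(b != c for b, c in zip(block, pn_code))
        bits.append(1 if mismatches > n // 2 else 0)
    return bits

pn = [1, 0, 1, 1, 0, 0, 1]     # illustrative 7-chip pseudo-noise code
tx = spread([1, 0, 1], pn)
rx = despread(tx, pn)
```

Because each bit is decided by a majority vote over many chips, a few chips corrupted by frequency-selective fading still leave the bit recoverable, which is the robustness the text attributes to CDMA.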
In conclusion, although the high delay spread that characterizes the horizontal link in underwater channels makes it difficult to maintain synchronization among the stations, especially when orthogonal code techniques are used [17], CDMA is a promising multiple access technique for underwater acoustic networks. This is particularly true in shallow water, where multi-paths and Doppler spreading play a key role in the communication performance.

Network layer (Routing)
The network layer is in charge of determining the path between a source (the sensor that samples a physical phenomenon) and a destination node (usually the surface station). In general, while many impairments of the underwater acoustic channel are adequately addressed at the physical and data link layers, some other characteristics, such as the extremely long propagation delays, are better addressed at the network layer. Basically, there are two methods of routing: virtual-circuit routing and packet-switched routing. In virtual-circuit routing, the network uses virtual circuits to decide on the path at the beginning of the network operation. Virtual-circuit routing protocols can be the better choice for underwater acoustic networks, for several reasons: underwater acoustic networks are typically asymmetric, whereas packet-switched routing protocols are designed for symmetric network architectures; virtual-circuit routing protocols are robust against link failures, which are common in the underwater environment; and virtual-circuit routing protocols have less signaling overhead and lower latency, both of which the underwater acoustic channel demands. However, virtual-circuit routing protocols usually lack flexibility. In packet-switched routing, every node that takes part in the transmission makes its own routing decision, i.e., chooses the next hop to relay the packet.
Packet-switched routing can be further classified into proactive, reactive, and geographic routing protocols. Most routing protocols for ground-based wireless networks are packet-switch based. Proactive routing protocols attempt to minimize message latency by maintaining up-to-date routing information from each node to every other node at all times; they broadcast control packets that carry routing table information. Typical protocols include Destination-Sequenced Distance-Vector (DSDV) [28] and the Temporally Ordered Routing Algorithm (TORA). However, proactive routing protocols incur a large signaling overhead to establish routes the first time and whenever the network topology changes, so they may not be a good fit for the underwater environment, given the high probability of link failure and the extremely limited bandwidth there. Reactive routing protocols initiate a route discovery process only upon request; correspondingly, nodes do not need to maintain sizable routing tables. Such protocols are more suitable for dynamic environments like ad hoc wireless networks. Typical examples are Ad hoc On-demand Distance Vector (AODV) [23] and Dynamic Source Routing (DSR) [27]. The drawback of reactive routing protocols is their high latency in establishing routes; as with their proactive counterparts, flooding of control packets is needed to establish paths, which brings significant signaling overhead. This high latency is exacerbated in the underwater environment because the propagation speed of acoustic signals is much lower than that of radio waves in air. Geographic routing (also called georouting or position-based routing) is a routing principle that relies on geographic position information. It is mainly proposed for wireless networks, and is based on the idea that the source sends a message toward the geographic location of the destination instead of using a network address.
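The greedy flavor of geographic forwarding can be sketched in a few lines. Node positions are assumed to be known 3D coordinates, and the recovery mode that real protocols use at local minima is omitted:

```python
import math

def greedy_next_hop(current, neighbors, destination):
    """Pick the neighbor geographically closest to the destination.

    Returns None when no neighbor improves on the current node (a
    local minimum, where a real protocol would fall back to recovery).
    """
    best, best_d = None, math.dist(current, destination)
    for n in neighbors:
        d = math.dist(n, destination)
        if d < best_d:
            best, best_d = n, d
    return best

sink = (0.0, 0.0, 0.0)                       # surface station
node = (1000.0, 0.0, -200.0)                 # current sensor
hops = [(500.0, 0.0, -150.0), (900.0, 400.0, -100.0)]
nxt = greedy_next_hop(node, hops, sink)      # the neighbor nearest the sink
```

Note that no routing tables or route discovery are involved: each hop needs only its own position, its neighbors' positions, and the destination's position.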
Geographic routing requires that each node can determine its own location and that the source is aware of the location of the destination. With this information, a message can be routed to the destination without knowledge of the network topology or a prior route discovery.

Transport layer
A transport layer protocol is needed in underwater sensor networks not only to achieve reliable collective transport of event features, but also to perform flow control and congestion control. The primary objective is to save scarce sensor resources and increase network efficiency. A reliable transport protocol should guarantee that applications can correctly identify the event features estimated by the sensor network. Congestion control is needed to prevent the network from being congested by data in excess of the network capacity, while flow control is needed to prevent network devices with limited memory from being overwhelmed by data transmissions. Most existing TCP implementations are unsuited to the underwater environment, since their flow control is based on a window mechanism that relies on an accurate estimate of the round-trip time (RTT), which is twice the end-to-end delay from source to destination. Rate-based transport protocols also seem unsuited to this challenging environment: they still rely on feedback control messages sent back by the destination to dynamically adapt the transmission rate, i.e., to decrease the rate when packet loss is experienced and to increase it otherwise, and the high delay and delay variance can cause instability in this feedback control. Furthermore, because of the unreliability of the acoustic channel, it is necessary to distinguish packet losses due to the high bit error rate of the acoustic channel from those caused by packets being dropped from the queues of sensor nodes due to network congestion.
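The loss-differentiation requirement can be made concrete with a toy rate controller. The halving and probing policy below is illustrative only, not a protocol from the literature:

```python
def adjust_rate(rate, loss_cause):
    """React to a loss report: back off only for congestion losses.

    rate: current sending rate (packets/s)
    loss_cause: "congestion" (queue drop), "channel" (bit errors), or "none"
    """
    if loss_cause == "congestion":
        return rate / 2       # queues are overflowing: send slower
    if loss_cause == "channel":
        return rate           # bad channel: slowing down only wastes capacity
    return rate * 1.05        # no loss: gently probe for more bandwidth

rate = adjust_rate(100.0, "channel")       # unchanged: loss was not congestion
```

A TCP-like controller that halves its rate on every loss would collapse its throughput on the high-BER acoustic channel; distinguishing the two causes is what keeps channel losses from being misread as congestion.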
In terrestrial networks, congestion is assumed to be the only cause of packet loss, and the solution is to decrease the transmission rate; in an underwater sensor network, however, if the packet loss is due to a bad channel, the transmission rate should not be decreased, in order to preserve throughput efficiency. Transport layer functionalities can be tightly integrated with data link layer functionalities in a cross-layer module. The purpose of such an integrated module is to make information about the condition of the variable underwater channel available at the transport layer as well. In fact, the state of the channel is usually known only at the physical and channel access sub-layers, while the design principle of layer separation makes this information transparent to the higher layers. This integration allows the transport layer to exploit that channel-state information.

Underwater Acoustic Sensor Network (UASN)

CHAPTER 1: Introduction
Most of the earth's surface is covered by water, including fresh water from rivers, lakes, etc., and salt water from the sea. Many of these areas remain unexplored, which calls for significant research effort and good communication systems. A wireless sensor network in an aqueous medium has the ability to explore the underwater environment in detail. All underwater applications need a good communication system as well as an effective routing protocol, so that underwater devices can communicate precisely. Underwater propagation speed varies with temperature, salinity, and depth. Based on depth, two scenarios are distinguished: shallow and deep water. Shallow water means depths of less than 200 m, with cylindrical spreading; deep water means depths of 200 m or more, with spherical spreading. Shallow and deep water differ in the ambient noise and in the spreading factor applied.
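The dependence of sound speed on temperature, salinity, and depth can be sketched with Medwin's simplified empirical formula, one of several such models in the literature; the temperature and depth values in the example are illustrative:

```python
def sound_speed(T, S, z):
    """Medwin's simplified formula for sound speed in seawater (m/s).

    T: temperature in degrees C, S: salinity in parts per thousand,
    z: depth in metres. Valid roughly for 0 <= T <= 35, 0 <= S <= 45,
    and z up to about 1000 m.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

c_surface = sound_speed(T=15.0, S=35.0, z=0.0)     # temperate surface water
c_deep = sound_speed(T=4.0, S=35.0, z=1000.0)      # cold water at 1 km depth
```

Both values come out near the nominal 1500 m/s, but they differ by tens of m/s, which is exactly the variation that bends sound rays and creates the shallow/deep propagation regimes described above.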
CHAPTER 2: Study of Underwater Acoustic Sensor Network (UASN)

Applications of UASN
Wireless sensor networks in an aqueous medium, also known as underwater sensor networks, enable a broad range of applications, including:
Environmental monitoring. Underwater sensor networks can be used to monitor pollution, whether chemical, biological (e.g., tracking fish or micro-organisms), nuclear, or from oil leakage, in bays, lakes, or rivers [1]. They can also be used to improve weather forecasts, detect climate change, and predict the effect of human activities on marine ecosystems, ocean currents, and temperature (e.g., the effect of global warming on the ocean).
Ocean exploration. Such networks can help explore minerals, oilfields, or reservoirs, determine routes for laying undersea cables, and prospect for valuable minerals.
Disaster prevention. Sensor networks that measure seismic activity from remote locations can provide tsunami warnings to coastal areas, or study the effects of submarine earthquakes (seaquakes) [2].
Equipment monitoring. Long-term equipment monitoring may be done with pre-installed infrastructure. Short-term equipment monitoring shares many requirements of long-term seismic monitoring, including the need for wireless (acoustic) communication, automatic configuration into a multihop network, localization (and hence time synchronization), and energy-efficient operation.
Mine reconnaissance. By using acoustic and optical sensors together, mine detection can be accomplished quickly and effectively.
Assisted navigation. Sensors can be used to discover dangers on the seabed, locate dangerous rocks or shoals in shallow waters, mooring positions, and submerged wrecks, and to perform bathymetry profiling.
Information collection. The main goal of a communication network is the exchange of information inside the network and with the outside via a gateway or switching center.
This application is used to share information among nodes and autonomous underwater vehicles.

Characteristics of UASN
Underwater Acoustic Networks (UANs), including but not limited to Underwater Acoustic Sensor Networks (UASNs) and Autonomous Underwater Vehicle Networks (AUVNs), are defined as networks composed of more than two nodes that use acoustic signals to communicate for the purpose of underwater applications. UASNs and AUVNs are two important kinds of UANs. The former is composed of many sensor nodes, mostly for monitoring purposes; the nodes usually have no or limited ability to move. The latter is composed of autonomous or unmanned vehicles with high mobility, deployed for applications that need mobility, e.g., exploration. A UAN can be a UASN, an AUVN, or a combination of both. Acoustic communication, on the other hand, is defined as a method of communicating from one point to another using acoustic signals; no network structure is formed in acoustic point-to-point communication. Sound travels through water better than electromagnetic waves or optical signals. An acoustic signal is a sound waveform, usually produced by sonar for underwater applications, and acoustic signal processing extracts information from such signals in the presence of noise and uncertainty. Underwater acoustic communications are mainly influenced by path loss, noise, multi-path, Doppler spread, and high and variable propagation delay. All these factors determine the temporal and spatial variability of the acoustic channel and make the available bandwidth of the underwater acoustic (UW-A) channel limited and dramatically dependent on both range and frequency. Long-range systems that operate over several tens of kilometers may have a bandwidth of only a few kHz, while a short-range system operating over several tens of meters may have more than a hundred kHz of bandwidth. These factors lead to low bit rates.
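The bandwidth-versus-range figures above can be summarized in a toy lookup; the bandwidth classes come from the text, while the numeric range boundaries are illustrative guesses:

```python
def available_bandwidth_khz(range_km):
    """Rough acoustic bandwidth available at a given range (kHz).

    The three classes follow the figures quoted in the text; the
    boundary values chosen here are illustrative, not measured.
    """
    if range_km <= 0.1:       # short range: several tens of metres
        return 100.0          # around a hundred kHz
    if range_km <= 10.0:      # medium range: several kilometres
        return 10.0           # on the order of ten kHz
    return 3.0                # long range: tens of kilometres, a few kHz

short_bw = available_bandwidth_khz(0.05)    # ~100 kHz class
long_bw = available_bandwidth_khz(50.0)     # only a few kHz
```

The two orders of magnitude between the short-range and long-range classes are what make any single fixed-rate protocol a poor fit across deployment scales.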
Underwater acoustic communication links can be classified according to their range as very long, long, medium, short, and very short links. Acoustic links are also roughly classified as vertical or horizontal, according to the direction of the sound ray; their propagation characteristics differ considerably, especially with respect to time dispersion, multi-path spread, and delay variance. Acoustic signaling is the only physically feasible tool that works in the underwater environment. By comparison, electromagnetic waves can travel only short distances in water, due to the high attenuation and absorption in the underwater environment. The absorption of electromagnetic energy in sea water is about 45√f dB per kilometer, where f is the frequency in hertz; in contrast, the absorption of acoustic signals over most frequencies of interest is about three orders of magnitude lower [40]. Hereafter, the factors that influence acoustic communications are analyzed, in order to state the challenges posed by underwater channels for underwater sensor networking. These include:
Path loss. Attenuation is mainly provoked by absorption due to the conversion of acoustic energy into heat, and it increases with distance and frequency. It is also caused by scattering and reverberation (on rough ocean surfaces and bottoms), refraction, and dispersion (due to the displacement of the reflection point caused by wind on the surface). Water depth plays a key role in determining the attenuation. Geometric spreading is the spreading of sound energy as a result of the expansion of the wavefronts; it increases with the propagation distance and is independent of frequency. There are two common kinds of geometric spreading: spherical (omni-directional point source) and cylindrical (horizontal radiation only).
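The two spreading geometries translate into simple transmission-loss laws; a sketch, assuming a 1 m reference distance and ignoring the frequency-dependent absorption term:

```python
import math

def spreading_loss_db(distance_m, geometry="spherical", ref_m=1.0):
    """Geometric spreading loss relative to a reference distance.

    spherical (deep water, omni-directional source): 20 * log10(r)
    cylindrical (shallow water, horizontal-only):    10 * log10(r)
    """
    k = {"spherical": 2, "cylindrical": 1}[geometry]
    return k * 10.0 * math.log10(distance_m / ref_m)

deep = spreading_loss_db(1000.0, "spherical")       # 60 dB at 1 km
shallow = spreading_loss_db(1000.0, "cylindrical")  # 30 dB at 1 km
```

Note that the loss is independent of frequency, as stated above; absorption adds a further frequency-dependent term on top of these figures.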
Noise. Man-made noise is mainly caused by machinery (pumps, reduction gears, power plants, etc.) and shipping activity (hull fouling, animal life on hulls, cavitation), especially in areas encumbered with heavy vessel traffic. Ambient noise is related to hydrodynamics (the movement of water, including tides, currents, storms, wind, and rain) and to seismic and biological phenomena.
Multi-path. Multi-path propagation may be responsible for severe degradation of the acoustic communication signal, since it generates inter-symbol interference (ISI). The multi-path geometry depends on the link configuration: vertical channels are characterized by little time dispersion, whereas horizontal channels may have extremely long multi-path spreads. The extent of the spreading is a strong function of depth and of the distance between transmitter and receiver.
High delay and delay variance. The propagation speed in the UW-A channel is five orders of magnitude lower than in the radio channel. This large propagation delay (0.67 s/km) can reduce the throughput of the system considerably. The very high delay variance is even more harmful for efficient protocol design, as it prevents accurate estimation of the round-trip time (RTT), the key parameter for many common communication protocols.
Doppler spread. The Doppler frequency spread can be significant in UW-A channels, causing degradation in the performance of digital communications: transmissions at a high data rate cause many adjacent symbols to interfere at the receiver, requiring sophisticated signal processing to deal with the generated ISI. Doppler spreading generates two effects: a simple frequency translation, which is relatively easy for a receiver to compensate for, and a continuous spreading of frequencies, which constitutes a non-shifted signal and is more difficult to compensate for.
If a channel has a Doppler spread of bandwidth B and a signal has symbol duration T, then there are approximately BT uncorrelated samples of its complex envelope. When BT is much less than unity, the channel is said to be underspread and the effects of Doppler fading can be ignored; when BT is greater than unity, it is overspread. Most of the described factors are caused by the chemical-physical properties of the water medium, such as temperature, salinity, and density, and by their spatio-temporal variations. These variations, together with the waveguide nature of the channel, cause the acoustic channel to be temporally and spatially variable. In particular, the horizontal channel is by far more rapidly varying than the vertical channel, in both deep and shallow water.

CHAPTER 3: Network Architecture
Underwater sensor nodes: The underwater sensor nodes are deployed on the sea floor, anchored to the ocean bottom [32]. The sensors are equipped with floating buoys that push the nodes upwards, so they are relatively stationary [3]. Using acoustic links, they relay data to an underwater sink, directly or via a multi-hop path.
Underwater sink nodes: Underwater sink nodes are in charge of collecting the data of the underwater sensors deployed on the ocean bottom and then sending it to the surface sink node. They may be equipped with vertical and horizontal acoustic transducers: the horizontal transceiver is used to collect the sensors' data, and the vertical transceiver provides the transmission link between the underwater sink and the surface sink node.
Surface sink node: The surface sink node is attached to a floating buoy with satellite, radio frequency (RF), or cell phone technology to transmit data to shore in real time.

2D Model
A reference architecture for two-dimensional underwater networks is shown in Figure 1. A group of sensor nodes is anchored to the ocean bottom. Underwater sensor nodes are interconnected to one or more underwater gateways by means of wireless acoustic links.
Underwater gateways are network devices in charge of relaying data from the ocean-bottom network to a surface station. To achieve this objective, they are equipped with two acoustic transceivers, namely a vertical and a horizontal transceiver. The horizontal transceiver is used by the underwater gateway to communicate with the sensor nodes in order to send commands and configuration data to the sensors (underwater gateway to sensors) and to collect monitored data (sensors to underwater gateway). The vertical link is used by the underwater gateways to relay data to a surface station; in deep-water applications, vertical transceivers must be long-range transceivers. The surface station is equipped with an acoustic transceiver able to handle multiple parallel communications with the deployed underwater gateways. It is also endowed with a long-range RF and/or satellite transmitter to communicate with an onshore sink (os-sink) and/or a surface sink (s-sink). In shallow water, bottom-deployed sensors/modems may communicate directly with the surface buoy, with no specialized bottom node (underwater gateway).

3D Model
Three-dimensional underwater networks are used to detect and observe phenomena that cannot be adequately observed by means of ocean-bottom sensor nodes, i.e., to perform cooperative sampling of the 3D ocean environment. In three-dimensional underwater networks, sensor nodes float at different depths to observe a phenomenon. In this architecture, shown in Figure 2, each sensor is anchored to the ocean bottom and equipped with a floating buoy that can be inflated by a pump. The buoy pushes the sensor towards the ocean surface. The depth of the sensor can then be regulated by adjusting the length of the wire that connects the sensor to the anchor, by means of an electronically controlled engine that resides on the sensor. Sensing and communication coverage in a 3D environment are rigorously investigated in [8].
The diameter, minimum degree, and maximum degree of the reachability graph that describes the network are derived as a function of the communication range, while different degrees of coverage of the 3D environment are characterized as a function of the sensing range.

3D Model with AUVs
The above figure represents the third type of network architecture, which consists of sensor nodes and Autonomous Underwater Vehicles (AUVs) that act as mobile sensor nodes for ocean monitoring, underwater resource study, etc.

CHAPTER 4: Differences between Underwater and Terrestrial Sensor Networks
An underwater acoustic channel differs from a ground-based radio channel in many respects, including:
Bandwidth is extremely limited. The attenuation of the acoustic signal increases with frequency and range [6][10]; consequently, the feasible band is extremely small. For example, a short-range system operating over several tens of meters may have a bandwidth of a hundred kHz; a medium-range system operating over several kilometers has a bandwidth on the order of ten kHz; and a long-range system operating over several tens of kilometers is limited to only a few kHz of bandwidth [11].
Propagation delay is long. The transmission speed of acoustic signals in salt water is around 1500 m/s [22], five orders of magnitude lower than the speed of electromagnetic waves in free space. Correspondingly, the propagation delay in an underwater channel is significant. This is one of the essential characteristics of underwater channels and has profound implications for localization and time synchronization.
The channel impulse response varies not only spatially but also temporally. The channel characteristics vary with time and depend heavily on the locations of the transmitter and receiver. The fluctuating nature of the channel can easily distort the received signals.
There are two types of propagation paths: macro-multipaths, which are the deterministic propagation paths, and micro-multipaths, which are random signal fluctuations. The macro-multipaths are caused by both reflection at the boundaries (bottom, surface, and any object in the water) and bending. Inter-symbol interference (ISI) thus occurs.
While terrestrial sensor nodes have very limited storage capacity, underwater-sensors may need to be able to do some data caching as the underwater channel may be intermittent. Spatial Correlation. While the readings from terrestrial sensors are often correlated, this is more unlikely to happen in underwater networks due to the higher distance among sensors. CHAPTER 5: Layered of UASN The underwater architecture network consists of five layers, application, transport, network, data link and physical layer as shown in the figure below. As typical underwater systems have limited processing capability, the protocol has been kept as simple as possible without significantly compromising performance. The underwater sensor network specifications currently do not include any recommendations for authentication and encryption. These may be easily implemented at the application layer or via a spreading scheme at the physical layer. Each layer is described by a SAPI. The SAPI is defined in terms of messages being passed to and from the layer. The clients (usually higher layers) of a layer invoke the layer via a request (REQ). The layer responds to each REQ by a response (RSP). Errors are reported via an ERR RSP with error codes. If the layer needs to send unsolicited messages to the client, it does so via a notification (NTF). A layer communicates logically with its peer layer via protocol data units (PDU). As the peer-to-peer communication is symmetric, a layer may send a REQ PDU to its peer layer at any time. It would optionally respond to such a PDU with a RSP PDU. This is logically depicted in Figure below It may be desirable in some cases, that non-neighboring layers communicate with each other to achieve cross-layer optimization. This may be implemented by allowing REQ and RSP PDUs between any two layers in the protocol stack. The underwater sensor network specifications define detailed message structures for all SAPI messages. 
These message structures include message identifiers, data formats to be used, parameters and their possible values Physical layer The physical layer provides framing, modulation and error correction capability (via FEC). It provides primitives for sending and receiving packets. It may also provide additional functionality such as parameter settings, parameter recommendation, carrier sensing, etc. At first underwater channel development was based on non-coherent frequency shift keying (FSK) modulation, since it relies on energy detection. Thus, it does not require phase tracking, which is a very difficult task mainly because of the Doppler-spread in the underwater acoustic channel. Although non-coherent modulation schemes are characterized by high power efficiency, their low bandwidth efficiency makes them unsuitable for high data rate multiuser networks. Hence, coherent modulation techniques have been developed for long-range, high-throughput systems. In the last years, fully coherent modulation techniques, such as phase shift keying (PSK) and quadrature amplitude modulation (QAM), have become practical due to the availability of powerful digital processing. Channel equalization techniques are exploited to leverage the effect of the inter-symbol interference (ISI), instead of trying to avoid or suppress it. Decision-feedback equalizers (DFEs) track the complex, relatively slowly varying channel response and thus provide high throughput when the channel is slowly varying. Conversely, when the channel varies faster, it is necessary to combine the DFE with a Phase Locked Loop (PLL) [9], which estimates and compensates for the phase offset in a rapid, stable manner. The use of decision feedback equalization and phase-locked loops is driven by the complexity and time variability of ocean channel impulse responses. Differential phase shift keying (DPSK) serves as an intermediate solution between incoherent and fully coherent systems in terms of bandwidth efficiency. 
DPSK encodes information in the phase difference relative to the previous symbol rather than to an arbitrary fixed reference, and may be referred to as partially coherent modulation. While this strategy substantially alleviates the carrier phase-tracking requirement, the penalty is an increased error probability over PSK at an equivalent data rate. Another promising solution for underwater communications is the orthogonal frequency division multiplexing (OFDM) spread spectrum technique, which is particularly efficient when noise is spread over a large portion of the available bandwidth. OFDM is frequently referred to as multicarrier modulation because it transmits signals over multiple sub-carriers simultaneously. In particular, following the concept of bit loading, sub-carriers that experience a higher SNR are allotted more bits, whereas fewer bits are allotted to sub-carriers experiencing attenuation; this requires channel estimation. Since the symbol duration on each individual carrier increases, OFDM systems perform robustly in severe multipath environments and achieve a high spectral efficiency. Many of the techniques discussed above require underwater channel estimation, which can be achieved by means of probe packets [17]. An accurate estimate of the channel can be obtained with a high probing rate and/or a large probe packet size, which however result in high overhead and a consequent drain on channel capacity and energy.

Data link layer (MAC layer)

The data link layer provides single-hop data transmission capability; it cannot transmit a packet successfully if the destination node is not directly accessible from the source node. It may include some degree of reliability, and may also provide error detection capability (e.g. a CRC check). In the case of a shared medium, the data link layer must include the medium access control (MAC) sub-layer.
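The CRC check mentioned above can be illustrated with a short Python sketch, using CRC-32 for concreteness; an actual modem might use a different polynomial and would pair error detection with FEC:

```python
import zlib

# Sender appends a CRC-32 over the frame payload; the receiver recomputes
# it to detect bit errors introduced by the channel.

def frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check(received: bytes):
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    return payload, zlib.crc32(payload) == crc

tx = frame(b"sensor reading: 17.2 C")
print(check(tx)[1])                        # True: frame intact
corrupted = bytes([tx[0] ^ 0x01]) + tx[1:]  # flip one bit in the payload
print(check(corrupted)[1])                 # False: bit error detected
```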
Frequency division multiple access (FDMA) is not suitable for underwater sensor networks due to the narrow bandwidth of underwater acoustic channels and the vulnerability of limited-band systems to fading and multipath. Time division multiple access (TDMA) shows limited bandwidth efficiency because of the long guard times required in the underwater acoustic channel: guard times must be dimensioned to account for the large propagation delay and delay variance of the underwater channel in order to minimize packet collisions from adjacent time slots. Moreover, the variable delay makes it very challenging to realize the precise synchronization, with a common timing reference, that TDMA requires. Carrier sense multiple access (CSMA) prevents collisions with an ongoing transmission at the transmitter side; to prevent collisions at the receiver side, however, it is necessary to add a guard time between transmissions dimensioned according to the maximum propagation delay in the network, which makes the protocol dramatically inefficient for underwater acoustic sensor networks. Contention-based techniques that rely on handshaking mechanisms such as RTS/CTS are impractical underwater for the following reasons: large delays in the propagation of RTS/CTS control packets lead to low throughput; due to the high propagation delay of underwater acoustic channels, when carrier sense is used, as in 802.11, it is likely that the channel is sensed idle while a transmission is ongoing, since the signal may not have reached the receiver yet; and the high variability of delay in handshaking packets makes it impractical to predict the start and finish times of other stations' transmissions. Thus, collisions are highly likely to occur.
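The cost of these guard times can be seen with some back-of-the-envelope arithmetic; the range, bit rate and packet size below are illustrative assumptions, not values from the text:

```python
# Why guard times dimensioned on the maximum propagation delay cripple
# TDMA/CSMA underwater: compare channel utilization for an acoustic link
# (sound travels at roughly 1500 m/s) against a radio link (about 3e8 m/s).

def utilization(range_m, prop_speed, bit_rate, packet_bits):
    guard = range_m / prop_speed       # worst-case one-way propagation delay
    tx_time = packet_bits / bit_rate   # time to transmit one packet
    return tx_time / (tx_time + guard)

# Acoustic: 1 km range, 10 kbit/s, 1000-bit packets
print(f"{utilization(1000, 1500, 10e3, 1000):.0%}")   # ~13%
# Radio: same range and packet, 1 Mbit/s
print(f"{utilization(1000, 3e8, 1e6, 1000):.0%}")     # ~100%
```

Under these assumptions the acoustic slot spends most of its time idle in the guard interval, which is the inefficiency the text describes.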
Code division multiple access (CDMA) is quite robust to the frequency-selective fading caused by underwater multipath, since it distinguishes simultaneous signals transmitted by multiple devices by means of pseudo-noise codes that spread each user's signal over the entire available band. CDMA reduces the number of packet retransmissions, which results in decreased battery consumption and increased network throughput. In conclusion, although the high delay spread that characterizes horizontal links in underwater channels makes it difficult to maintain synchronization among the stations, especially when orthogonal code techniques are used [17], CDMA is a promising multiple access technique for underwater acoustic networks. This is particularly true in shallow water, where multipath and Doppler spreading play a key role in communication performance.

Network layer (Routing)

The network layer is in charge of determining the path between a source (the sensor that samples a physical phenomenon) and a destination node (usually the surface station). In general, while many impairments of the underwater acoustic channel are adequately addressed at the physical and data link layers, some other characteristics, such as the extremely long propagation delays, are better addressed at the network layer. There are basically two methods of routing: virtual-circuit routing and packet-switched routing. In virtual-circuit routing, the network uses virtual circuits to decide on the path at the beginning of the network operation. Virtual-circuit routing protocols can be a better choice for underwater acoustic networks. The reasons are these: underwater acoustic networks are typically asymmetric rather than symmetric.
Packet-switched routing protocols, however, are designed for symmetric network architectures. Moreover, virtual-circuit routing protocols are robust to link failures, which is critical in the underwater environment, and they have less signaling overhead and lower latency, both of which suit the underwater acoustic channel. On the other hand, virtual-circuit routing protocols usually lack flexibility. In packet-switched routing, every node involved in the transmission makes its own routing decision, i.e., decides the next hop to which to relay the packet. Packet-switched routing can be further classified into proactive, reactive and geographic routing protocols. Most routing protocols for ground-based wireless networks are packet-switch based. Proactive routing protocols attempt to minimize message latency by maintaining up-to-date routing information from each node to every other node at all times; they broadcast control packets that contain routing table information. Typical protocols include Destination-Sequenced Distance Vector (DSDV) [28] and the Temporally Ordered Routing Algorithm (TORA). However, proactive routing protocols incur a large signaling overhead to establish routes for the first time and whenever the network topology changes, so they may not be a good fit for the underwater environment, given the high probability of link failure and the extremely limited bandwidth. Reactive routing protocols initiate a route discovery process only upon request; correspondingly, each node does not need to maintain a sizable routing look-up table. These protocols are better suited to dynamic environments such as ad hoc wireless networks. Typical examples are Ad hoc On-demand Distance Vector (AODV) [23] and Dynamic Source Routing (DSR) [27]. The drawback of reactive routing protocols is their high latency in establishing a route.
As with their proactive counterparts, flooding of control packets is needed to establish paths, which brings significant signaling overhead, and the high latency is further aggravated underwater because acoustic signals propagate much more slowly than radio waves in air. Geographic routing (also called georouting or position-based routing) is a routing principle that relies on geographic position information. It was mainly proposed for wireless networks and is based on the idea that the source sends a message to the geographic location of the destination instead of using the network address. Geographic routing requires that each node can determine its own location and that the source is aware of the location of the destination. With this information, a message can be routed to the destination without knowledge of the network topology and without a prior route discovery.

Transport layer

A transport layer protocol is needed in an underwater sensor network not only to achieve reliable collective transport of event features, but also to perform flow control and congestion control; the primary objective is to save scarce sensor resources and increase network efficiency. A reliable transport protocol should guarantee that applications are able to correctly identify the event features estimated by the sensor network. Congestion control is needed to prevent the network from being congested by data in excess of the network capacity, while flow control is needed to prevent network devices with limited memory from being overwhelmed by data transmissions. Most existing TCP implementations are unsuited to the underwater environment, since their flow control is based on a window mechanism that relies on an accurate estimate of the round-trip time (RTT), which is twice the end-to-end delay from source to destination. Rate-based transport protocols also seem unsuited to this challenging environment.
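The position-based forwarding at the heart of the geographic routing described above can be sketched in its simplest, greedy form; the topology and coordinates are invented for illustration, and real protocols add recovery strategies for routing voids:

```python
import math

# Greedy geographic forwarding: each node relays the packet to the
# neighbor whose known position is closest to the destination.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(pos, neighbors, src, dst):
    """pos: node -> (x, y); neighbors: node -> nodes within acoustic range."""
    path, node = [src], src
    while node != dst:
        nxt = min(neighbors[node], key=lambda n: dist(pos[n], pos[dst]))
        if dist(pos[nxt], pos[dst]) >= dist(pos[node], pos[dst]):
            return None  # local minimum (void): greedy forwarding is stuck
        path.append(nxt)
        node = nxt
    return path

# Hypothetical four-node deployment
pos = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (6, 1)}
nbrs = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(greedy_route(pos, nbrs, "A", "D"))   # ['A', 'B', 'C', 'D']
```

Note that no route discovery or topology knowledge is needed: each hop is decided locally from positions alone, which is what makes the approach attractive for sparse, dynamic underwater deployments.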
Rate-based protocols still rely on feedback control messages sent back by the destination to dynamically adapt the transmission rate, i.e., to decrease the rate when packet loss is experienced and to increase it otherwise. The high delay and delay variance can thus cause instability in the feedback control. Furthermore, due to the unreliability of the acoustic channel, it is necessary to distinguish packet losses caused by the high bit error rate of the acoustic channel from those caused by packets being dropped from the queues of sensor nodes due to network congestion. Terrestrial protocols assume that congestion is the only cause of packet loss, so the remedy is to decrease the transmission rate; in an underwater sensor network, however, if a loss is due to a bad channel, the transmission rate should not be decreased, so as to preserve throughput efficiency. Transport layer functionality can be tightly integrated with data link layer functionality in a cross-layer module. The purpose of such an integrated module is to make information about the condition of the variable underwater channel available at the transport layer as well; usually the state of the channel is known only at the physical and channel access sub-layers, while the design principle of layer separation keeps this information hidden from the higher layers. This integration allows maximizing the