
Introduction to Communications report. The subject will be: "5G Slides Safely off the Hype Curve and Makes a Nice Boring Landing". This project consists of two parts: first, the Final Project proposal; second, the Final Project report. Please, no plagiarism (0% similarity in Turnitin). The guidelines for the project and proposal are in the attachment; samples of the proposal and the final project are also attached.
guidelines_for_final_project_and_proposal.docx

sample_final__project.docx


sample_project_proposal.docx

Unformatted Attachment Preview

Guidelines for Final Project
Notes:
1. A Final Project proposal is due as part of the grade of the Final Project. The proposal
should include the tentative title of the project, a description of the project, and three
references (books, journal or symposium articles only) that are relevant to the project topic.
The total length of the proposal should be about one page. The proposal should be sent as an
e-mail attachment. The project is not approved until I explicitly state that it is. At that time, the
student can proceed to work on the Final Project.
2. For Spring 2019, students will go to the following IEEE Communications Society website:
https://www.comsoc.org/publications/ctn/ten-communications-technologytrends-2018 and choose one of the 10 areas to do a Final Project in. Your
subject will be about "5G Slides Safely off the Hype Curve and Makes a
Nice Boring Landing":
"We were a little premature with this one, so we will repeat it again this
year. We know of some large deployments this year. In the USA, Verizon
is aggressively predicting roll-out this year and AT&T expects to get there
before year's end. A proper worldwide summary of all of this excitement is
an article all by itself. Suffice to say, what is actually meant by 5G is
complicated."
3. Students can do the Final Project either individually, or in a group of up to four total
students. The Final Project proposal for groups of two or more must clearly indicate the
responsibility of each group member, or it will not be approved.
4. In regard to the Final Project report, I expect a report of at least 4 pages in length using the
standard IEEE double-column format which is available on the IEEE website. The Final
Project report must include figures to illustrate important points; if your figures or tables
are larger than a column width, center them and run them across both columns. Unless
they are original, all material taken from references, including figures, must be credited to
the author of those materials. Final Project reports may be passed through Turnitin to
ensure originality. If videos are used to illustrate an important point, they must be included
with the Final Project on a flash drive and referred to in the report. Videos are encouraged.
If software is used for a simulation study, the program must be included with the Final
Project on a flash drive and referred to in the report. Software simulation is strongly
encouraged. Those students receiving the highest grades will have chosen a specific
topic/application within one of the 10 areas and demonstrated it by a small-scale
simulation.
Below you will find the outline you must follow for the structure of your Final Project
report. You might need to write more or less in the various sections, depending on your
topic, but it should give you an idea as to what is most important and what you should
focus on. It is perfectly fine and strongly encouraged for you to draw your own
conclusions regarding an approach, method and/or algorithm you read about in the
literature and/or future directions you would pursue if you had the opportunity.
1. Introduction: Motivate your project and the topic you have chosen (1 page max).
2. Previous Work: Discuss the literature and advances as they relate to the topic you have
chosen (2-3 pages).
3. The Method: Present the method or methods you have chosen to focus on. This would
normally require some mathematics. Use images to illustrate difficult concepts. Make
sure that the reader understands why you have chosen the particular method or methods,
e.g., the family of methods or techniques may provide a significant degree of security at a
relatively low implementation complexity. Make sure you compare the method or methods
you have selected to other, competing methods or techniques (4-5 pages).
4. The Simulation: If you decide to include a computer simulation in your Final Project,
describe it here. The simulation does not have to be complex; it can be a very simplified
version of the method or techniques that you have decided to focus on. Present a few
simulation results and discuss them. Note the strengths and weaknesses of the method or
methods you have chosen (2-3 pages).
5. Summary & Conclusions: Summary and wrap-up. You may wish to include here any
social or ethical implications of the security method you have selected (1 page max).
6. References: Include a complete list of references, using the standard format for IEEE publications.

(Sample)
Cloud Systems Integration
I. Abstract
Cloud System Integration focuses on the delivery of reliable, secure, fault-free, self-sustainable infrastructures for hosting internet-based devices and data storage applications. Each of these devices and applications has different system structures, configurations, and security access requirements. Evaluating the performance and infrastructure requirements, such as hardware, software, and services, for the different applications and device models under varying data load, power consumption, buffer delay, and system size is a challenge we face today in an ever-growing sector. To better explain the process, in
this paper we present a new virtual and theoretical simulation framework that uses modelling and simulation to better experiment
on this new integration system. The proposed
simulation framework has the following key
features: (1) multi-tenancy, where different
resources are dynamically allocated and de-allocated on demand. The resource allocation
should be elastic, in the sense that it should
change appropriately and quickly with the
demand; (2) self-service and on-demand service
models. It should allow the user to interact with
the cloud to perform tasks like building,
deploying, and managing; (3) availability of
virtualization engine. By creating a virtual,
rather than physical, version of your application
topologies you can move those topologies at will
across clouds and between your data center and
the cloud. (4) guarantees on round-the-clock
availability, adequate resources, performance,
and bandwidth. Any
compromise on these guarantees could prove
fatal for customers; (5) support for modelling
and implementation of large scale Cloud
computing infrastructure, including device
network on a single physical computing node
and virtual machines.
Keywords – Cloud Computing, Simulation, Virtualization, Multi-tenancy.
II. Introduction
Cloud computing is believed to have been
invented by Joseph Carl Robnett Licklider in the
1960s with his work on ARPANET to connect
people and data from anywhere at any time [1].
It utilizes a highly virtualized storage infrastructure and resembles broader cloud computing in terms of accessible interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can be utilized from an off-premises service or deployed on-premises [2]. The term typically referred to a hosted object storage service, but it has broadened to include other types of data storage that are now available as a service, like block storage.
Cloud System Integration delivers a complete infrastructure of services, made available to meet all levels of customer needs. In industry, these services are collectively referred to as cloud computing, a utility that has the potential to power the next generation of data centers by architecting them as a network of virtual services, so that users can access and deploy files and applications from anywhere in the world on demand. Internet service developers and providers are no longer required to build large and complex hardware and software infrastructures in order to deploy and operate their services. Cloud computing also offers significant benefits to IT companies by freeing them from the low-level tasks of setting up basic servers and software system support.
Some examples of Cloud-based
applications include social networking, web
hosting, virtual content accessibility, and real-time data processing. Each of these application types has different configuration and deployment requirements. Evaluating the performance and infrastructure requirements, such as hardware, software, and services, for the different applications and device models under varying data load, power consumption, buffer delay, and system size is a challenge we face today in an ever-growing sector. The use of real test platforms limits experiments to the size of the testing platform, and makes the reproduction of real-world results extremely difficult and beyond the control of the tester.
The utilization of new, enhanced simulation tools has opened the possibility of evaluating the code structure prior to software development, in an environment where the engineer can conduct actual tests. Such simulations are very important, especially in the case of Cloud-based systems, where access to sensitive data and currency trades occurs. Simulation-based testing offers significant benefits, allowing engineers to test their services in a controlled and safe environment, and to tune the performance and reliability of the system before deploying on real Clouds. On the developer's side, simulation environments allow evaluation of different data buffering speeds, data handling protocols, and cyber security scenarios. These testing studies can aid developers in optimizing their source code and improving speed and reliability. Without simulation platforms, developers would have to rely either on theoretical and imprecise evaluations, or on costly and inefficient trial-and-error approaches.
In this paper, we present a new virtual and theoretical simulation framework that uses modelling and simulation to better experiment on this new integration system and its application services.
By using cloud simulation platforms,
researchers and developers can focus on specific
design issues they want to investigate, without
having to reload the whole system to run
individual updates. Our proposed simulation
framework offers the following features: (1)
support for modeling and simulation of large
scale Cloud computing infrastructure, including
device network on a single physical computing
node; and (2) multi-tenancy, where different
resources are dynamically allocated and de-allocated on demand. Among its unique features are: (1) self-service and on-demand service models, which allow the user
to interact with the cloud to perform tasks like
building, deploying, and managing; (2)
availability of virtualization engine. By creating
a virtual, rather than physical, version of your
application topologies you can move those
topologies at will across clouds and between
your data center and the cloud. (3) guarantees on
round-the-clock availability, adequate resources,
performance, and bandwidth. These features
would speed up the development of new
algorithms, methods, and protocols.
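The elastic, multi-tenant allocation described above can be illustrated with a minimal sketch. All names here, such as ElasticPool, are hypothetical illustrations and not part of any real framework:

```python
class ElasticPool:
    """Toy multi-tenant resource pool: allocates and frees units on demand."""

    def __init__(self, capacity):
        self.capacity = capacity      # total resource units (e.g. cores)
        self.allocations = {}         # tenant -> units currently held

    def allocate(self, tenant, units):
        """Grant units to a tenant if capacity allows; return True on success."""
        used = sum(self.allocations.values())
        if used + units > self.capacity:
            return False              # demand exceeds the pool; caller must wait or scale
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units
        return True

    def release(self, tenant, units):
        """Return units to the pool when a tenant's demand drops."""
        held = self.allocations.get(tenant, 0)
        self.allocations[tenant] = max(0, held - units)

pool = ElasticPool(capacity=8)
pool.allocate("tenant_a", 5)
pool.allocate("tenant_b", 2)
rejected = pool.allocate("tenant_c", 4)   # would exceed capacity
pool.release("tenant_a", 3)               # demand drops, units freed
accepted = pool.allocate("tenant_c", 4)   # now fits
```

The point of the sketch is the elasticity requirement: the pool's view of who holds what changes immediately with demand, so a request that fails at one instant can succeed after another tenant releases resources.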
III. Cloud computing
A question to ask is: what does cloud computing mean? There are many ways to define cloud computing. Each vendor defines it differently, but in the end they all come back to the same definition. Cloud computing [7] means storing and accessing data over the internet instead of on your own computer's hard drive. There are many types of cloud computing platforms, each with its own abilities; examples include Amazon EC2 and Microsoft Azure. The power of cloud computing depends on its data centers. A cloud computing data center has several levels. At the low level are the storage servers and the application servers: in the storage servers we find storage virtualization management, and in the application servers we find the virtual machine monitor. These two are the main levels that power the data centers. Above them sit the virtual machines, such as Windows or macOS with Mono. At the top level are the cloud applications and their networks.
Some applications, such as social networking, business, and gaming, require this higher level; the quality of these applications depends on their time criticality.
To understand the proposed Cloud integration system, it is essential to grasp the process it aims to simplify: cloud computing. Cloud computing is a dynamic system that involves the integration and storage of information across multiple servers, whose information and services can be accessed by consumers through a provider/consumer agreement. Examples of novel Cloud computing services include Microsoft Azure, Amazon EC2, Google App Engine, and Aneka. At the lowest level, the data centers of Cloud computing are fueled by storage and application servers, which allow for higher-level applications like social networking and gaming portals.
Grids have been the widely accepted standard, fulfilling the service needs of compute- and data-driven scientific applications. Multiple Grid simulators, including GridSim [4], SimGrid [3], and GangSim [5], have been proposed to study the facets of Grids. While the approaches of the three simulators differ, none is able to support both infrastructure- and application-level requirements. Clouds aim to rectify these issues, but Cloud computing systems, applications, and services require further research and development. The goal of the system is to support this research and development.
The SimJava discrete event simulation engine is the foundation and provides functions such as the organization of events, communication, and management of the simulation clock. The second layer is the implementation of the GridSim functions, which allow for the modeling of multiple grid infrastructures. The following level, CloudSim, advances its predecessor by expanding upon its central abilities. The User Code is the highest layer and exposes functionalities, applications, and so forth. Going forward, research efforts are needed by both academia and industry to further the core uses and to establish best practices regarding cloud integration.
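As a rough illustration of what such a discrete-event foundation provides (an ordered event queue plus a simulation clock), consider the following minimal sketch. EventEngine is an invented name for illustration, not the SimJava API:

```python
import heapq

class EventEngine:
    """Minimal discrete-event core: ordered event queue plus a simulation clock."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []      # heap of (time, seq, callback)
        self._seq = 0         # tiebreaker keeps same-time events in FIFO order

    def schedule(self, delay, callback):
        """Enqueue a callback to fire `delay` time units from the current clock."""
        heapq.heappush(self._queue, (self.clock + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        """Pop events in time order, advancing the clock to each event's time."""
        while self._queue:
            time, _, callback = heapq.heappop(self._queue)
            self.clock = time
            callback(self)

log = []
engine = EventEngine()
engine.schedule(5.0, lambda e: log.append(("late", e.clock)))
engine.schedule(1.0, lambda e: log.append(("early", e.clock)))
engine.run()   # events fire in timestamp order, not insertion order
```

The essential property, as in any discrete-event engine, is that events execute in timestamp order regardless of the order in which they were scheduled, and the clock jumps directly from one event time to the next.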
IV. Grids
In recent years, Grids [6] have been moving forward and improving as a basis for high-performance services that deliver data and compute applications.
Many kinds of Grid simulators have been developed, and each has different abilities. For example, SimGrid [3] is made to simulate distributed applications on the Grid, and there are other toolkits that simulate virtual resources.
V. Simulation
[Figures omitted in this preview: Figure 2, Control system; Figure 3, Information after receiving; Figure 4, Information before transmitting; Figure 5, Signal waveforms.]
VI. Design & Implementation
For the Cloud System Integration, there are
various fundamental blocks or classes that need
to be considered.
Data Center. This block models the core infrastructure services offered by providers in the Cloud system. The class comprises a set of homogeneous or heterogeneous hosts with their respective configurations, such as capacity, memory, and storage.
SAN Storage. SAN stands for storage
area network. SAN Storage is commonly used to
store large sets of data. This class utilizes a
simple interface in order to allow the user to
store and retrieve certain data at any given time
relative to the availability of the network.
Virtual Machine. This class models the virtual machines managed by the Host component of the system. A host can, at a given instant, run several virtual machines and distribute cores among them according to predefined sharing policies. Each virtual machine stores its important characteristics: processor, memory, and storage.
Cloudlet. This class represents the
Cloud-based services, such as delivery of
content and social networking. In order to
successfully host the application, every
component must have a pre-defined length and
data transfer amount.
Provisioner. This class represents the various provisioning policies of the Cloud system. It models the bandwidth and memory allocation to the virtual machines. The function of the bandwidth provisioning system is to allocate network bandwidth across the data center, while the memory provisioning system allocates actual memory space to the virtual machines. This system is essential for executing a virtual machine on a host, since it determines the amount of free memory available.
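A minimal sketch of how these building blocks might fit together follows. The class and attribute names are illustrative assumptions, not the actual CloudSim API:

```python
from dataclasses import dataclass, field

@dataclass
class Cloudlet:
    """A Cloud-based task unit with a pre-defined length."""
    length_mi: int            # task length in million instructions

@dataclass
class VirtualMachine:
    mips: int                 # processing rate in million instructions per second
    ram_mb: int

    def runtime(self, cloudlet):
        """Time (seconds) this VM needs to finish a cloudlet."""
        return cloudlet.length_mi / self.mips

@dataclass
class Host:
    cores: int
    ram_mb: int
    vms: list = field(default_factory=list)

    def place(self, vm):
        """Admit a VM only if enough RAM remains (a simple provisioning policy)."""
        used = sum(v.ram_mb for v in self.vms)
        if used + vm.ram_mb > self.ram_mb:
            return False
        self.vms.append(vm)
        return True

@dataclass
class DataCenter:
    """A set of hosts with their configurations."""
    hosts: list

host = Host(cores=4, ram_mb=4096)
dc = DataCenter(hosts=[host])
vm = VirtualMachine(mips=1000, ram_mb=2048)
placed = host.place(vm)
t = vm.runtime(Cloudlet(length_mi=120000))   # 120000 MI at 1000 MIPS -> 120 s
```

The provisioning check inside Host.place plays the role of the memory provisioner described above: a VM executes on a host only if the host's free memory can accommodate it.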
Entities and Threading
Because the cloud system is built on top of the SimJava simulation engine, it inherits SimJava's threading model for the simulation of entities. An entity is a programming component that directly extends the SimJava core; it can send and receive messages through SimJava's shared event queue when its input and output ports are associated with the simulation system. A large number of entities in a simulation environment can become a performance bottleneck. To reduce this, it is best to implement only the core components as inherited members of SimJava.
Modeling the cloud system this way helps the computing machine moderate its capacity for concurrent processes. The Java virtual machine needs to manage only three threads, regardless of the number of hosts simulated: user, data center, and broker. The CloudSim-like VM policies are lightweight and do not consume much processing power. VM progress must be updated and monitored after every simulation step. Host-level VM processing triggers cloudlet processing inside each VM, which directs updates to the individual task units. Computed completion times are sent to the data center entity and kept in a queue that the data center queries after each processing step. When tasks in the queue are completed, they are returned to the user.
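The per-step update cycle sketched above can be shown as a toy model; the field names and the step function are assumptions for illustration:

```python
def step(vms, dt, done_queue):
    """One simulation step: advance each VM's running task by dt seconds.
    Finished tasks are moved to the data center's completion queue."""
    for vm in vms:
        if vm["remaining_mi"] <= 0:
            continue                            # nothing left to process
        vm["remaining_mi"] -= vm["mips"] * dt   # progress = rate * elapsed time
        if vm["remaining_mi"] <= 0:
            done_queue.append(vm["task"])       # completion reported to data center

vms = [
    {"task": "t1", "mips": 1000, "remaining_mi": 1500},
    {"task": "t2", "mips": 1000, "remaining_mi": 5000},
]
done = []
step(vms, dt=1.0, done_queue=done)   # t1 has 500 MI left, t2 has 4000 MI left
step(vms, dt=1.0, done_queue=done)   # t1 finishes on this step
```

After each step, the data center would query the completion queue and return finished tasks to the user, which is the flow the paragraph above describes.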
Communication Among Entities
At the beginning of simulation, all datacenter
entities register with the cloud information
services. This registry provides database
services, mapping the user to suitable cloud
providers. Brokers consult the CIS services
about clouds that offer infrastructure services,
which match the user’s application requirement.
This communication flow corresponds to the basic simulated experience; in addition, messages flow from the broker to the data center to confirm execution actions or the number of VMs a user can create.
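The registration-and-query flow can be sketched as follows. CloudInformationService here is a toy stand-in for the CIS registry, not the real implementation:

```python
class CloudInformationService:
    """Toy registry: data centers register, brokers query for matching clouds."""

    def __init__(self):
        self._registry = {}   # data center name -> set of offered services

    def register(self, name, services):
        """A data center entity registers its infrastructure services."""
        self._registry[name] = set(services)

    def query(self, required):
        """Return the data centers whose services cover the user's requirements."""
        need = set(required)
        return [dc for dc, offered in self._registry.items() if need <= offered]

cis = CloudInformationService()
cis.register("dc_east", ["compute", "storage"])
cis.register("dc_west", ["compute"])

# A broker consults the registry on behalf of the user's application.
matches = cis.query(["compute", "storage"])
```

The broker never contacts data centers blindly; it first maps the user's requirements to suitable providers through the registry, mirroring the flow described above.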
VII. Test and Evaluation
The tests evaluate the efficiency of cloudsim in
modeling the computing environment. In order
to evaluate the overhead in building a simulated
cloud environment, various experiments were
conducted. The goal of this test is to evaluate the computing requirements for simulating the infrastructure, measured as:
I. the time at which the runtime environment loads the cloud system program; and
II. the instant at which components are fully initialized and ready to process events.
The instantiation time grows exponentially with the number of machines. The time to instantiate 100,000 machines is about 5 minutes, which is acceptable for the scale of the conducted experiments.
The other test quantifies the performance of the CloudSim core when subjected to user workloads. The VM scheduling policy was set to space-shared, which means that only one task unit was allowed on a processing core at a given time, so user requests had to follow this constraint. The application model was designed to have 500 task units, each requiring 120,000 million instructions to be executed on a host. Each task unit required only 300 kB of data transfer from the data center.
Task execution included the creation of VMs, which were later submitted in groups of 50 every 10 minutes. The main aim was to allocate a task unit to each processing core. With each task unit having its own core, incoming tasks did not in any way affect execution time, as compared to time-shared execution, where each task unit was affected by incoming task units in the scheduled tasks. By forming groups of 50 tasks, the time taken to complete was less, overloading was reduced, and in the long run, ho …