2018
The 10th CASCON Workshop on Cloud Computing
Marin Litoiu, Joe Wigglesworth
Proceedings of the 28th Annual International Conference on Computer Science and Software Engineering, pp. 362-363, 2018
Abstract
In the last decade we have seen a dramatic change in the IT landscape. Cloud computing has changed the way applications are developed, deployed, and operated. At the same time, new application domains such as the Internet of Things (IoT) add new requirements for the cloud, including new cloud architectures and guarantees for performance, privacy, and security. IoT is a family of technologies, protocols, software, and algorithms that enable sensor-embedded objects such as city infrastructure, buildings, farms, factories, appliances, and health and personal accessories to connect to the Internet, push data to the cloud, and pull commands from it. The data emitted by sensor-enabled objects can be archived on the cloud, where it is fused and analyzed to make inferences that can be used to adapt the same objects or their environments. IoT technologies are key enablers for building new applications in a variety of domains.
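The IoT data flow this abstract describes (objects push readings to the cloud, cloud-side analysis derives commands, objects pull the commands back to adapt themselves) can be illustrated with a short, self-contained Python sketch. Everything here is illustrative only: CloudStore and its methods are hypothetical stand-ins for a real cloud backend, not anything from the workshop.

    import random
    import time

    class CloudStore:
        """Hypothetical cloud backend: archives readings, serves commands."""
        def __init__(self):
            self.readings, self.commands = [], {}

        def push(self, device_id, value):
            self.readings.append((device_id, value))   # archive on the cloud

        def analyze(self):
            # Fuse recent data and derive an adaptation command per device.
            for device_id, temp in self.readings[-10:]:
                self.commands[device_id] = "cool" if temp > 25.0 else "idle"

        def pull(self, device_id):
            return self.commands.get(device_id, "idle")

    cloud = CloudStore()
    for _ in range(5):                                  # one sensor-embedded object
        cloud.push("thermostat-1", 20 + random.random() * 10)  # push data
        cloud.analyze()                                 # cloud-side inference
        action = cloud.pull("thermostat-1")             # pull command, adapt
        print("adapting object:", action)
        time.sleep(0.1)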
DevOps Round-Trip Engineering: Traceability from Dev to Ops and Back Again
Miguel A. Jimenez, Lorena Castaneda, Norha M. Villegas, Gabriel Tamura, Hausi A. Muller, Joe Wigglesworth
International Workshop on Software Engineering Aspects of Continuous Development and New Paradigms of Software Production and Deployment, pp. 73-88, 2018
Abstract
DevOps engineers follow an iterative and incremental process to develop Deployment and Configuration (D&C) specifications. Such a process likely involves manual bug discovery, inspection, and modifications to the running environment. Failing to update the specifications appropriately leads to technical debt, including configuration drift, snowflake configurations, and erosion across environments. Despite the efforts that DevOps teams put into automating operations work, there is a lack of tools to support the development and maintenance of D&C specifications. In this paper, we propose Tornado, a two-way Continuous Integration (CI) framework (i.e., Dev→Ops and Dev←Ops) that automatically updates D&C specifications when the corresponding system changes, enabling bi-directional traceability of the modifications. Tornado extends the concept of CI, integrating operations work into development by committing code corresponding to manual modifications. We evaluated Tornado by implementing a proof of concept using Terraform templates, OpenStack and CircleCI, demonstrating its feasibility and soundness.
Keywords: traceability, tornado, technical debt, soundness, software engineering, software deployment, round trip engineering, proof of concept, devops, computer science
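The Ops→Dev half of the idea, detecting that the running environment has drifted from its committed Terraform templates and recording the change as a commit, could look roughly like the sketch below. This is our illustrative reconstruction, not the paper's Tornado implementation; it relies on the real terraform plan -detailed-exitcode convention (exit code 2 signals pending changes) and ordinary git commands, and it simplifies by assuming the refreshed state is versioned alongside the templates.

    import subprocess

    def sync_ops_to_dev(repo_dir):
        """Detect drift between live infrastructure and the committed
        Terraform templates; if found, record it as a commit (Ops -> Dev)."""
        plan = subprocess.run(
            ["terraform", "plan", "-detailed-exitcode", "-no-color"],
            cwd=repo_dir, capture_output=True, text=True)
        if plan.returncode == 2:   # 0 = in sync, 1 = error, 2 = drift detected
            # Refresh local state so the manual change is captured on disk
            # (a simplification: real tooling would map it into the templates).
            subprocess.run(["terraform", "refresh"], cwd=repo_dir, check=True)
            subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
            subprocess.run(
                ["git", "commit", "-m", "ops: record manual environment change"],
                cwd=repo_dir, check=True)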
Runtime Performance Management for Cloud Applications with Adaptive Controllers
Cornel Barna, Marin Litoiu, Marios Fokaefs, Mark Shtern, Joe Wigglesworth
Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering, pp. 176-183
Abstract
Adaptability is an expected property of modern software systems: to cope with changes in the environment, they self-adjust their structure and behaviour. Robustness is a crucial component of adaptability; it refers to the ability of a system to deal with uncertainty, i.e., perturbations or unmodelled system dynamics that can affect the quality of the adaptation. Cost is another important property, ensuring that resources are used prudently and frugally whenever possible. Engineering robust and cost-effective adaptive systems can be accomplished using a control theory approach. In this paper, we show how to implement a model identification adaptive controller (MIAC) using a combination of performance and control models, and how such a system satisfies the goals of robustness and cost-effectiveness. The controller we employ is multi-input, meaning that it can issue a variety of commands to adapt the system, and multi-output, meaning that it can regulate multiple performance indicators simultaneously. We show that such a solution can account for uncertainty and modelling errors and efficiently adapt a web application with multiple tiers of functionality spanning multiple layers of deployment (software and virtual machines) on Amazon EC2, an actual cloud environment.
Keywords: virtual machine, system dynamics, software system, robustness, engineering, controller, control engineering, cloud computing, adaptive system, adaptability
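The core of model identification adaptive control is re-estimating a performance model online and deriving the control action from the identified model. The toy sketch below is not the authors' MIAC design; it assumes, purely for illustration, that response time follows r ≈ a/n for n virtual machines, re-identifies the parameter a from each observation, and then picks the smallest n meeting the target (the cost goal).

    import math
    import random

    TARGET_RT = 0.5                      # response-time goal (seconds)
    a_est, n = 1.0, 1                    # model parameter estimate, VM count

    def observe(n):
        """Stand-in for measuring the real system: r = a/n plus noise."""
        return 4.0 / n + random.gauss(0, 0.02)

    for step in range(15):
        r = observe(n)
        a_est += 0.5 * (r * n - a_est)   # re-identify model parameter a ~ r*n
        n = max(1, math.ceil(a_est / TARGET_RT))   # cheapest n meeting goal
        print(f"step {step}: rt={r:.2f}s, a~{a_est:.2f}, scale to {n} VMs")

The point of the identification step is robustness: because a is continually re-fitted, the controller keeps tracking the goal even when the initial model is wrong.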
2016
Cloud Adaptation with Control Theory in Industrial Clouds
Cornel Barna, Marios Fokaefs, Marin Litoiu, Mark Shtern, Joe Wigglesworth
2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 231-238
Abstract
The volatility of web software systems, for example due to traffic fluctuations, can be addressed through cloud resource elasticity. Cloud providers offer specific services to automate the process of elasticity, so that application developers can efficiently and effectively manage their cloud resources. Current autoscaling methods mostly employ rule-based or threshold-based techniques. In this work, we discuss a more sophisticated and robust method based on control theory. We present the design of a simple controller and show how it can be applied to real cloud environments. We demonstrate the applicability of our controller by deploying it on two cloud environments, one public and one private. Our experiments show that the same controller functions consistently and maintains the set performance goal in both environments, indicating the potential portability of the controller across clouds.
Keywords: volatility, software system, software portability, robustness, real time computing, distributed computing, controller, control theory, computer science, cloud testing, cloud computing, autoscaling
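One classical shape such a simple controller can take (shown here only as an illustration; the paper's exact control law may differ) is a proportional-integral rule on the response-time error, which, unlike a fixed threshold, reacts in proportion to how far the system is from its goal.

    KP, KI, TARGET = 2.0, 0.5, 0.4       # illustrative gains and goal (seconds)
    integral, replicas = 0.0, 2

    def control_step(measured_rt):
        """One PI iteration: positive error (too slow) adds replicas."""
        global integral, replicas
        error = measured_rt - TARGET
        integral += error
        delta = KP * error + KI * integral
        replicas = max(1, round(replicas + delta))
        return replicas

    for rt in [0.9, 0.7, 0.5, 0.42, 0.40]:   # sample measurements
        print("scale to", control_step(rt), "instances")

Because the control law is independent of any one provider's API, the same loop can drive scaling actions on a public or a private cloud, which is the portability argument the abstract makes.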
Enabling devops for containerized data-intensive applications: an exploratory study
Marios Fokaefs, Cornel Barna, Rodrigo Veleda, Marin Litoiu, Joe Wigglesworth, Radu Mateescu
CASCON '16 Proceedings of the 26th Annual International Conference on Computer Science and Software Engineering, pp. 138-148, 2016
Abstract
In an ever-changing landscape of software technology, new development paradigms, novel infrastructure technologies, and emerging application domains reveal exciting opportunities, but also unprecedented challenges for developers, practitioners, and software engineers. Amidst this innovation, containers as infrastructure support, data-intensive applications as a domain, and DevOps as a development paradigm have recently gained significant popularity. In this work, we focus on these concepts and present an exploratory study on how to develop such applications, deploy and deliver them in Docker containers, and eventually manage them by enabling autoscaling at the container level. In the paper, we detail our experimental process, pointing out the problems we encountered along with the solutions we used. Finally, we present a set of stable experiments to demonstrate the autoscaling capabilities we achieved.
Keywords: software engineering, software, popularity, exploratory research, devops, computer science, cloud computing, big data, autoscaling, adaptive system
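Container-level autoscaling of the kind explored here can be driven directly from the Docker CLI. The sketch below is ours, not the paper's tooling, and the service name web is a placeholder: it samples per-container CPU with docker stats and adjusts a Swarm service with docker service scale.

    import subprocess

    SERVICE, HIGH, LOW = "web", 70.0, 20.0   # placeholder service and thresholds

    def cpu_percentages():
        """CPU% of each running container, as reported by docker stats."""
        out = subprocess.run(
            ["docker", "stats", "--no-stream", "--format", "{{.CPUPerc}}"],
            capture_output=True, text=True, check=True).stdout.split()
        return [float(v.rstrip("%")) for v in out]

    def autoscale(current_replicas):
        cpus = cpu_percentages()
        avg = sum(cpus) / len(cpus) if cpus else 0.0
        if avg > HIGH:
            n = current_replicas + 1
        elif avg < LOW:
            n = max(1, current_replicas - 1)
        else:
            n = current_replicas
        if n != current_replicas:
            subprocess.run(["docker", "service", "scale", f"{SERVICE}={n}"],
                           check=True)
        return n

Running it, of course, requires a Docker host with a service named as configured; the loop logic itself is deliberately the simple threshold style the paper contrasts with more sophisticated controllers.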
2015
The 7th CASCON workshop on cloud computing
Marin Litoiu, Joe Wigglesworth
CASCON '15 Proceedings of the 25th Annual International Conference on Computer Science and Software Engineering, pp. 292-294, 2015
Abstract
Hybrid clouds are private and public sub-clouds working together to mitigate privacy and security concerns while addressing the need for large computation and storage capacity. Academic research into hybrid clouds has focused on the middleware and abstraction layers for creating, managing, and using hybrid clouds. For example, researchers used the MapReduce paradigm to split a data-intensive workload into mapping tasks sorted by the sensitivity of the data, with the most sensitive data being processed locally and the least sensitive processed in a public cloud. Commercial support for hybrid clouds is growing in response to the business case for cloud federation. IBM offers both PureApplication System (to manage a private cloud) and PureApplication Service (a public cloud offering), along with software to bridge the two at the Software-as-a-Service (SaaS) level. More recently, the IBM Bluemix Platform-as-a-Service (PaaS) enables integration of the Bluemix cloud with on-premises private clouds.
Keywords: workload, software as a service, software, operating system, middleware, ibm, database, computer science, cloud federation, cloud computing, business case
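The sensitivity-aware MapReduce split mentioned in the abstract can be shown with a toy partitioner: records tagged sensitive stay on the private sub-cloud, everything else may go to the public one. The tags and the two run_* functions below are hypothetical placeholders, not the cited researchers' system.

    records = [("patient-123", "sensitive"), ("weather", "public"),
               ("payroll", "sensitive"), ("clickstream", "public")]

    def run_private(batch):   # placeholder: executes on the private sub-cloud
        return [f"private:{key}" for key, _ in batch]

    def run_public(batch):    # placeholder: executes on the public sub-cloud
        return [f"public:{key}" for key, _ in batch]

    # Partition map tasks by data sensitivity before dispatching.
    private = [r for r in records if r[1] == "sensitive"]
    public = [r for r in records if r[1] != "sensitive"]
    print(run_private(private) + run_public(public))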
2014
5th workshop on cloud computing
Marin Litoiu, Joe Wigglesworth, Tinny Ng
CASCON '14 Proceedings of the 24th Annual International Conference on Computer Science and Software Engineering, pp. 288-289, 2014
Abstract
The shared computing and communication infrastructure known as cloud computing is supporting a growing number of companies in driving their core businesses. The term cloud characterizes the end user's perspective: it offers services that users access as outsiders (in the form of a computing and communication platform, infrastructure, or application) while remaining agnostic about the underlying technology. The implementation details are abstracted away, and the service/computing is consumed as a pay-per-use service rather than acquired as an asset. From the service provider's perspective, a number of technologies can be deployed to deliver the end-user experience. When the provider is outside of the end user's organization, it is called the public cloud, or just the cloud. The same underlying technology can be used to provide similar infrastructure, platforms, or software within the organization, perhaps offered by a separate business unit, to take advantage of the benefits while maintaining control; in this case, the term private cloud is used. Separate clouds (separated by technology, management, or geography) unified to appear as one are termed federated clouds. When the federated clouds run different technologies, and in particular do not natively expose the same APIs, the more specialized term is a heterogeneous federated cloud. When the clouds being federated comprise both private and public clouds, the result is a hybrid cloud. Cloud offerings are often classified into three main -as-a-Service (-aaS) categories: Infrastructure-, Platform-, and Software-. Other categories are sometimes used to describe specific implementations of these categories: Storage-aaS, Management-aaS, etc.
Keywords: world wide web, strategic business unit, software, implementation, end user, computer science, cloud testing, cloud computing security, cloud computing
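The taxonomy in this abstract maps cleanly onto a small classification function. The sketch below merely restates those definitions in code, with invented field names (owner and api), and simplifies by letting the hybrid case take precedence over the heterogeneous one.

    def classify(clouds):
        """clouds: list of dicts like {"owner": "self"|"provider", "api": str}.
        Returns the taxonomy term from the abstract for the combination."""
        if len(clouds) == 1:
            return "private cloud" if clouds[0]["owner"] == "self" else "public cloud"
        owners = {c["owner"] for c in clouds}
        apis = {c["api"] for c in clouds}
        if owners == {"self", "provider"}:
            return "hybrid cloud"             # private + public federation
        if len(apis) > 1:
            return "heterogeneous federated cloud"  # different native APIs
        return "federated cloud"

    print(classify([{"owner": "self", "api": "openstack"},
                    {"owner": "provider", "api": "ec2"}]))   # hybrid cloud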
2013
A generic framework for application configuration discovery with pluggable knowledge
Fan Jing Meng, Xuejun Zhuo, Bo Yang, Jing Min Xu, Pu Jin, Ajay Apte, Joe Wigglesworth
2013 IEEE Sixth International Conference on Cloud Computing (CLOUD), pp. 236-243
Abstract
Discovering application configurations and dependencies in the existing runtime environment is a critical prerequisite to the success of cloud migration, and it has attracted much attention from both researchers and commercial vendors. However, the high complexity …
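Although the visible abstract cuts off, the title points to a discovery framework whose knowledge of configuration formats is supplied by plugins. One common way such pluggability is structured (an assumption on our part, not necessarily the paper's architecture) is a registry of parsers keyed by file pattern:

    import fnmatch
    import re

    PLUGINS = []   # (glob pattern, parser) pairs contributed by knowledge plugins

    def plugin(pattern):
        def register(fn):
            PLUGINS.append((pattern, fn))
            return fn
        return register

    @plugin("*.properties")
    def parse_properties(text):
        """Extract key=value settings, e.g. JDBC URLs revealing dependencies."""
        return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

    @plugin("httpd*.conf")
    def parse_apache(text):
        return {"listen": re.findall(r"^Listen\s+(\S+)", text, re.M)}

    def discover(filename, text):
        """Run every plugin whose pattern matches the configuration file."""
        return [fn(text) for pat, fn in PLUGINS if fnmatch.fnmatch(filename, pat)]

    print(discover("app.properties", "db.url=jdbc:db2://host:50000/PRODDB"))

The appeal of this shape is that adding knowledge of a new configuration format means adding a plugin, not changing the discovery engine.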
2003
Bridging the digital divide for work and play - a workshop
G.M. Silberman, J.L. Mitchell, M.M. Klawe, F. Liauw, J.P. Wigglesworth, I.R. Posner
Proceedings of the International Conference on Information Technology: Research and Education (ITRE 2003), pp. 494-500
Abstract
A multidisciplinary workshop, Bridging the Digital Divide for Work and Play, was held November 3-4, 2001, in Toronto, Ontario, Canada. The meeting attempted to identify the critical areas where research and development are needed to increase literacy in both developed and developing countries, while bridging the digital divide more generally. The paper, containing an account of the discussions at the workshop, addresses another objective of the workshop, i.e., the wide dissemination of its ideas. An important conclusion of the deliberations was that new inventions in hardware, systems, and software were not the key issues. Instead, designers need to rethink their specifications away from fastest, smallest, and leading edge as prime considerations. Trade-offs that emphasize worldwide access, affordability, stability, and simplicity of use can make a significant contribution towards bridging the digital divide. Although not specifically addressed in our conclusions, it is important to recognize the motivations for learning to use new tools and technology. These are different by age group, with children and seniors wanting to "play", while working adults learn best when it has a positive impact on their work.
Keywords: world wide web, public relations, multidisciplinary approach, literacy, key issues, educational technology, digital divide, developing country, computer science, computer literacy, bridging
2002
Bringing academic research directly to development: IBM's Centres for Advanced Studies
J. Wigglesworth
IEEE International Engineering Management Conference, pp. 866-870, 2002
Abstract
To stay ahead of competitors and satisfy their customers, corporations are always seeking ways to speed up the transfer of technology into their products and service offerings. Academic researchers are always seeking ways to show the relevance of their work by applying it to industrial problems. The collaborative research model developed and refined by the IBM Centres for Advanced Studies (CAS) over the past 12 years shows how it is possible to bring corporate product development teams and academic researchers together to their mutual benefit. The CAS model originated in the IBM Toronto Laboratory and has since been replicated at other IBM product development laboratories. CAS operations have been established in the United States at the Austin, Texas, and Raleigh, North Carolina laboratories and at a second Canadian site at the Vancouver Innovation Centre.
Keywords: technology transfer, research model, new product development, marketing, intellectual property, industrial property, ibm, engineering, context aware services, competitor analysis
2000
Java Programming: Advanced Topics
Joe Wigglesworth, Paula Lumby
Course Technology - Thomson Learning, 2000
1999
Java Programming: Making the Move from C++
Joe Wigglesworth, Paula Lumby
Course Technology - International Thomson Press, 1999
1993
Surveys as a method for improving the development process
Joe Wigglesworth
CASCON '93 Proceedings of the 1993 Conference of the Centre for Advanced Studies on Collaborative Research: Software Engineering, Volume 1, pp. 337-355
Abstract
Traditionally, project teams wait until the product has been shipped to customers before surveying the project participants for their thoughts on what went right and what went wrong. These surveys are usually called "postmortems" and the cadaver analogy is appropriate: it is too late to do anything to help the patient. A solution is to check the patient's vital signs at regular intervals and prescribe treatment as required. This was the approach taken in a pilot study run recently in the Image Systems Center of the IBM PRGS Toronto Laboratory. At predefined project checkpoints, a survey was sent to all participants of the pilot project to solicit their anonymous feedback about what was working well and what needed changing. The questions in the survey were designed to be generic so that the same set of questions could be used for each checkpoint, in order that comparisons could be made between checkpoints. Groups of related questions were created, each having several multiple-choice questions and one free-form text question, so that both qualitative and quantitative feedback was obtained. The quantitative results were analyzed for trends, and the qualitative results were examined for evidence of specific process defects. In the later stages of the pilot study, a follow-up meeting was held so that project members could discuss the identified process defects and suggest action items to avoid the defects in the future. The results of this pilot study were positive, and there is good reason to extend this survey approach to other areas of the Laboratory. It is a natural enhancement to the Defect Prevention Process, and an aid for meeting the corrective-action procedures requirement of ISO 9000 registration.
Keywords: vital signs, operations research, ibm, engineering management, distributed computing, defect prevention, computer science, analogy
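The checkpoint-over-checkpoint trend analysis the abstract describes amounts to comparing per-question mean scores between consecutive surveys. A toy version, with made-up checkpoint names and scores rather than the study's data:

    from statistics import mean

    # responses[checkpoint][question] -> list of 1-5 multiple-choice scores
    responses = {
        "design": {"morale": [4, 4, 5], "tooling": [3, 4, 3]},
        "code":   {"morale": [4, 3, 4], "tooling": [2, 3, 2]},
        "test":   {"morale": [3, 3, 2], "tooling": [2, 2, 3]},
    }

    checkpoints = list(responses)
    for q in responses[checkpoints[0]]:
        means = [mean(responses[cp][q]) for cp in checkpoints]
        trend = "declining" if means[-1] < means[0] else "stable/improving"
        print(q, [round(m, 2) for m in means], "->", trend)

A declining trend on a question group is the quantitative signal that would prompt the follow-up meeting and a look at the free-form comments for the underlying process defect.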
1985
Influence of flow contraction on solids removal in a small circular clarifier
Joseph Wigglesworth, P. L. Silveston, R. R. Hudgins
Canadian Journal of Civil Engineering 12(3), 717-719, 1985
Abstract
Tests were conducted on a 1.22 m (4 ft) diameter clarifier in which a flow contraction baffle had been installed. This baffle accelerated the flow towards the overflow weir at the outer diameter of the tank. Removal rates for suspended solids were improved by 18-20% by the use of flow contraction in comparison with a conventional clarifier of the same size. A partial flow contraction baffle gave similar results. The results suggest that the capacity of large-scale clarifiers might be extended by retrofitting them with flow contraction baffles. Key words: clarifier, flow contraction, internal waves.
Keywords: weir, suspended solids, retrofitting, internal wave, geotechnical engineering, flow, engineering, contraction, clarifier, baffle