Issue 23 - Today Software Magazine


No. 23 • May 2014 • www.todaysoftmag.ro • www.todaysoftmag.com

TSM

TODAY SOFTWARE MAGAZINE

Chasing The Perfect Scrum • Java 8, lambda expressions and more • Delivering successful mobile apps • Big Data and Social Media: The Big Shift • How to go beyond APIs • Keeping Your Linux Servers Up To Date (I) • Skills over Technology • Template Engines for Java Web Development • How to protect a good business idea • Effective use-case analysis • Code Kata • 5 Simple Rules for an Efficient Campaign



6 Cluj IT marks a new step in its development as an innovative cluster Paulina Mitrea and Andrei Kelemen

8 How to Web is launching MVP Academy Irina Scarlat

10 Topconf Bucharest 2014 Chris Frei

11 The JavaScript community will meet on the 3rd of June in Bucharest! Iunieta Sandu

12 The 5th edition of the Agile mammoths ended in Cluj-Napoca! Adina Grigoroiu, CAPM

14 Delivering successful mobile apps Larisa Gota

18 Java 8, lambda expressions and more Silviu Dumitrescu

21 Qt: How I Came To Love C++ Ambrus Oszkár

25 How to go beyond APIs Alpar Torok

28 Keeping Your Linux Servers Up To Date (I) Sorin Pânca

31 Data Modeling in Big Data Silvia Răusanu

35 Effective use-case analysis Anita Păcurariu

39 Code Kata Tudor Trișcă

39 Skills over Technology Alexandru Bolboaca and Adrian Bolboaca

41 Chasing The Perfect Scrum Bogdan Mureșan

43 5 Simple Rules for an Efficient Campaign Ruxandra Tereanu

44 Big Data and Social Media: The Big Shift Diana Ciorba

46 OnyxBeacon iBeacon Management Platform Bogdan Oros

49 For real! Antonia Onaca

51 Template Engines for Java Web Development Dănuț Chindriș

55 How to protect a good business idea Claudia Jelea

56 Gogu and the water bottle Simona Bonghez, Ph.D.


editorial


Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine

The month of April has proved to be, just like in past years, a propitious month for writing articles. We are glad to see the increased interest of our collaborators, as well as that of our readers. Online, we count over 6,000 readers every month, without taking into account the printed magazines distributed at different events and the direct downloads. We are also extending our release events: besides Cluj, we will be present in Brasov for the first time, a release took place in Timisoara a month ago, and we are planning new releases in Bucharest and Iasi.

Startups are a subject that is becoming more and more important, even for companies that work exclusively in outsourcing, and this interest is reflected in the great number of events and programs dedicated to them. Even so, the transition towards product development is not easy, and failure can discourage people. A different approach is the creation of cultural projects, or of projects that improve the life of the community. They contain the elements necessary for a successful commercial product: popularity, ease of use, promotion through social networks and, last but not least, the implementation of the application requirements. Two examples known to the readers of the magazine are Statui de Daci – statuidedaci.ro, built by Gemini Solutions, and the iOS and Android applications of Today Software Magazine, implemented by 3Pillar Global and Gemini Solutions. We invite you to consider this course, and we promise an article dedicated to this trend.

The theme of this issue is social media and, from this perspective, we invite you to read Data Modeling in Big Data, which analyzes the data denormalization necessary in the cloud context, as well as Big Data and Social Media: The Big Shift, which shows some statistics on this evolution. Among the technical articles of this issue: Java 8, lambda expressions and more, where you will discover the latest news in the Java language; Qt: How I Came To Love C++, a rediscovery of C++ in the Qt context; How to go beyond APIs, on how we can better define APIs; Keeping Your Linux Servers Up To Date, for those working in DevOps; OnyxBeacon iBeacon Management Platform, presenting one of the latest technologies used in communicating with final clients; and Template Engines for Java Web Development, which evaluates the template engines available for Java. An optimization of the way we learn is suggested in Code Kata and Skills over Technology, and for a better planning of requirements we invite you to read Effective use-case analysis. The steps and aspects that have to be considered for a good product release are covered in Delivering successful mobile apps and 5 Simple Rules for an Efficient Campaign. In the end, I would like to propose an article written in the Agile Scrum spirit: Chasing The Perfect Scrum.

Enjoy your reading!

Ovidiu Măţan

Founder of Today Software Magazine



Editorial Staff

Editor-in-chief: Ovidiu Mățan (ovidiu.matan@todaysoftmag.com)
Editor (startups & interviews): Marius Mornea (marius.mornea@todaysoftmag.com)
Graphic designer: Dan Hădărău (dan.hadarau@todaysoftmag.com)
Copyright/Proofreader: Emilia Toma (emilia.toma@todaysoftmag.com)
Translator: Roxana Elena (roxana.elena@todaysoftmag.com)
Reviewer: Tavi Bolog (tavi.bolog@todaysoftmag.com)
Reviewer: Adrian Lupei (adrian.lupei@todaysoftmag.com)
Accountant: Delia Coman (delia.coman@todaysoftmag.com)

List of authors

Andrei Kelemen (andrei.kelemen@clujit.ro)
Executive director @ Cluj IT Cluster

Claudia Jelea (claudia.jelea@jlaw.ro)
Lawyer @ IP Boutique

Alexandru Bolboaca (alex.bolboaca@mozaicworks.com)
Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works

Bogdan Mureșan (bogdan.muresan@3pillarglobal.com)
Director of Engineering @ 3Pillar Global

Anita Păcurariu (anita.pacurariu@endava.com)
Business analyst @ Endava

Ruxandra Tereanu (ruxandra.tereanu@betfair.com)
Conversion Analyst @ Betfair

Chris Frei (frei@topconf.com)
Organizer @ Topconf

Sorin Pânca (sorin.panca@yardi.com)
Senior Systems Administrator @ Yardi România

Paulina Mitrea (Paulina.Mitrea@cs.utcluj.ro)
Coordinator of the Innovation group @ Cluj IT Cluster

Larisa Gota (lgota@smallfootprint.com)
QA Engineer @ Small Footprint

Tudor Trișcă (Tudor.Trisca@msg-systems.com)
Team Lead & Scrum Master @ msg systems România

Silviu Dumitrescu (silviu.dumitrescu@accesa.eu)
Line manager @ Accesa

Adina Grigoroiu, CAPM (adina.grigoroiu@confucius.ro)
Trainer and consultant @ Colors in Projects

Ambrus Oszkár (oszkar.ambrus@accenture.com)
Software Engineer @ Accenture

Silvia Răusanu (silvia.rausanu@isdc.eu)
Senior Developer @ ISDC

Alpar Torok (alpar-istvan.torok@hp.com)
Functional architect @ HP România

Diana Ciorba (diana.ciorba@codespring.ro)
Marketing Manager @ Codespring

Bogdan Oros (bogdan@onyxbeacon.com)
Co-Founder @ Onyx Beacon

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com | www.facebook.com/todaysoftmag | twitter.com/todaysoftmag
ISSN 2285 – 3502 | ISSN-L 2284 – 8207

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.



showcase

Cluj IT marks a new step in its development as an innovative cluster

This month has marked a new phase in the efforts of building a relevant identity for the Cluj IT Cluster, but also an opportunity for development on a course of innovation for our member organizations. The project presented last year by Cluj IT within the operation POSCCE / Op.1.3.3 is in the last phase of signing the financing contract, and it will be implemented for a period of 18 months after signing. It is the result of more than a year of preparation.

One process worth mentioning here, as it points to the maturity of our member organizations, was the selection of the ideas for innovative products with marketable potential that were to be supported through this project. Following the evaluation of the subproject ideas proposed by companies and universities within the Innovation group, we selected the concrete initiatives for building innovative software products inscribed in the general theme of the project, which will be developed up to the final product phase, as well as a number of subprojects financeable up to the design phase (meaning a complete software project), for whose continuation – implementation and release on the market – other future investments will be attracted. Consortia formed by companies of the cluster as well as universities are grouped around each project; therefore, once we begin the implementation of this project, we will truly test the cooperation potential of our members.

Furthermore, the project allows the development of activities that target organizational consolidation and the extension of relations among the cluster members. To be more specific, training actions will be implemented for the human resources of the cluster companies; partnerships with entities from outside the cluster will be initiated and developed, by participating in conferences and fairs; advertising and marketing campaigns will be carried out; and major events of our organization (such as Cluj Innovation Days, 2015 edition) will be sponsored. Very importantly, the project will provide assistance and consultancy for consolidating the development directions of the Cluj IT Cluster, through which we wish to align to the highest standards of excellence in Europe.

In our article in the 20th issue of the magazine, called "Cluj IT Cluster: Interdisciplinary Innovation and Advanced IT Solutions for an Avant-garde Urban Community", in the context of presenting the general objectives of the Cluj IT Cluster, we emphasized that, in virtue of the creed stated in the regulations of the Cluj IT Cluster association, our most important goal is to offer innovative IT solutions for the community, based on a collaborative input of know-how and advanced (even avant-garde)


skills, gathered from all the knowledge and expertise providing agents, which are so well represented in our city. In this context, we have already announced the preparation of one of our cluster's major projects, namely "The Innovative Development through Informatization of the Cluj-Napoca Urban Ecosystem", also known as "Cluj-Napoca: Next Generation Brained City" – a project approved for financing by POSCCE / Op.1.3.3 which, as mentioned above, is in the final phase of signing the financing agreement.

Through its manner of approach, this project is meant to generate a living area based on the innovative concept of an ecological and fully informatized urban community, known as the "networked ecological city", which will create the premises for our urban environment to become harmonious and eco-efficient, where the components and specific levels of the activities and features of an intelligent city are harmonized by means of an integrative and exhaustive informatization.

The approach of the central concept of the project begins from the identification of two essential premises, which relate to one another as follows: (1) the present urban (but also rural, in some cases) communities benefit, on certain levels, from informatics systems that are neither correlated nor integrated as a whole, found at different stages of evolution and development, on a large variety of platforms and IT technologies; (2) if the first premise is preserved in the current paradigm, the actual development pace of the communities, on all their levels, might lead to chaotic evolutions (contrary to the need of harmonious expansion on all components).

Considering these two premises, and in view of eliminating the risk pointed out in the second one, a model of development on layers – inter-correlated and communicating layers, harmonized through coherent, innovative and integrative informatization – constitutes the paradigm assumed by the design of the project itself, a design based on the most advanced concepts of urban development.

Still at the level of project conceptualization, the entire approach is based on the implication of the tool offered by the knowledge triangle in the materialization – as the final target – of the human communities' health triangle, which ensures the development of a business and living environment of the "Next Generation Brained City" type.

Fig. Knowledge triangle – interaction between research, education and innovation; Health triangle – interaction between mental, physical and social health

Defining this new urban concept, which envisages a sustainable and efficient economic development for an eco-efficient and profitable community, the entire design of the project is structured on seven levels. The subprojects selected to be developed by the different groups of companies and representatives of the universities from the cluster go into these levels, namely: (1) the level of public administration informatization; (2) the level of transportation / traffic networks; (3) the level of the health network; (4) the level of utilities infrastructure (from the underground level to the level of air networks); (5) the level of the educational and cultural network; (6) the level of the network of economic and business entities; (7) the level of habitat based on the concept of "next generation housing", meaning intelligent buildings belonging to the "smart city" concept.

The final target of the project is perfectly convergent with and correlated to another major objective of the Cluj IT Cluster, namely the materialization of the project called "Cluj Innovation City"¹, with its target defined for a period of 15-20 years, a project which already benefits from the support of the local authorities (City Hall, County Council, Prefecture, The North-West Regional Development Agency, etc.). As a reference concept, the entire approach has the target defined at the European and global level regarding the Eco-Smart City concept², with the year 2020 as a deadline. Actually, the entire economic development policy of the European Union is based on two major concepts, namely smart specialization and clusterization, while the tools essential to reaching the goals of socio-economic performance are circumscribed in the development of IT&C, of digitization and of key enabling technologies. We hope that the alignment of Romania to these strategic development trends, which are also financially sustained, will be continued by actions of direct support of the clusters and the private environment.

http://www.woodsbagot.com/project/langfang-eco-smart-city/
http://www.eco-business.com/news/smart-cities-better-world/
http://www.europeanvoice.com/folder/europescities/230.aspx?gclid=CNmHl4OFlboCFY_KtAod3TYATw

1 http://www.clujinnovationcity.com/
2 http://setis.ec.europa.eu/implementation/technology-roadmap/european-initiative-on-smart-cities

Paulina Mitrea
Paulina.Mitrea@cs.utcluj.ro
Coordinator of the Innovation group @ Cluj IT Cluster

Andrei Kelemen
andrei.kelemen@clujit.ro
Executive director @ Cluj IT Cluster




event

How to Web is launching MVP Academy, a pre-acceleration program for startups in the region

How to Web is launching MVP Academy, a pre-acceleration program dedicated to Central and Eastern European startups. Between April 30 and May 17, teams that are working on innovative tech products can apply for the program by filling in the registration form available online at www.mvpacademy.co.

Central and Eastern European startups are well known for their technical talent, but they don't always have the business knowledge necessary to transform their ideas into globally successful products. Given this context, How to Web is launching MVP Academy, a pre-acceleration program that will equip entrepreneurs in the region with the knowledge needed to develop their startup into a globally successful venture, to speed up their growth and to explore the opportunities in the market.

How to Web MVP Academy will take place between June 2nd and July 22nd at TechHub Bucharest and will bring together 14 promising teams from the region. Besides the educational component, the finalist teams will have access, for the entire duration of the program, to co-working space and mentoring, as well as to an international tech community and connections with angel investors, acceleration programs and early stage venture funds. Consequently, the How to Web MVP Academy experience will help startups develop their products into thriving businesses.

The program is designed for tech startups (teams or individuals) that have less than 2 years of activity, have developed at least a minimum viable product, haven't raised more than 50,000 EUR by the moment they join the program and haven't participated in any other acceleration program so far. The tech products they are working on must fall into one of the following categories: software, hardware, internet of things, mobile, robotics, biotech, medtech or ecommerce. How to Web MVP Academy is a program developed in partnership with Microsoft, Romtelecom and Cosmote, and supported by Bitdefender, Raiffeisen Bank, hub:raum and TechHub Bucharest.

During the 7 weeks of the program, the 14 teams will attend practical workshops and will acquire key competencies required for building a startup: business concepts, finance, legal aspects, product development, storytelling, marketing, community building, networking and pitching. Moreover, the teams will benefit from mentorship sessions and will have the chance to talk directly to locally, regionally and internationally renowned specialists and entrepreneurs. The progress of the teams participating in How to Web MVP Academy will be monitored through a series of key performance indicators, thus supporting their accelerated growth. Among the benefits the finalist teams will enjoy are access to tech infrastructure offered by international companies (cloud hosting or free access to different platforms), access to the TechHub Bucharest co-working space, community and events, key connections in the industry, and ongoing support in developing their startups during the program.

How to Web MVP Academy will bring in front of the teams


more than 40 mentors with experience and competences relevant for the startups' development stage: other startup founders, teams that have graduated acceleration programs, business and product specialists, as well as early stage investors. Among them will be Jon Bradford (Managing Director, Techstars), Florin Talpeş (Co-founder & CEO, Bitdefender), Cosmin Ochişor (Business Development Manager, hub:raum), Gabriel Coarnă (Architect, Evernote Clearly), Adrian Gheară (Co-founder, Neobyte Solutions, business angel and advisor to various startups), Bobby Voicu and Cristi Badea (Co-founders of MavenHut and alumni of the Startupbootcamp accelerator) and Daniela Neumann (Scrum Master / Change Management at idealo internet GmbH). The full list of mentors that will guide the teams during How to Web MVP Academy is available online at http://mvpacademy.co/mentors/.

The program will end on Tuesday, July 22nd, with a Demo Day, an event during which the teams will pitch their products and present their progress in front of a jury comprising angel investors, early stage venture funds and representatives of some of the most prestigious acceleration programs in Europe. Thus, the startups will have the chance to get additional financing and continue developing their products even after the program ends.

The 14 finalist teams will participate in How to Web MVP Academy for free, and no equity will be taken from them. Applications can be submitted online at www.mvpacademy.co until Saturday, May 17, and they will be evaluated by a specialist jury considering factors such as the team and its experience in the field, the target market's size, industry trends, the initial traction, and the feasibility and scalability of the product. The teams accepted into the program will be announced on Friday, May 23rd.

The visibility of the event is ensured by Goal Europe, Netokracija, IT Dogadjadi, Digjitale, Entrepreneur.bg, Newtrend.bg, Adevărul Tech, Forbes România, Wall-Street.ro, Business Cover, Manager Express, Business Woman, Market Watch, Ctrl-D, PC World, Computer World, Gadget Trends, Today Software Magazine, Agora, Yoda.ro, Incont.ro, România Liberă, Zelist Monitor, Comunicatedepresa.ro and Times New Roman.

Irina Scarlat

irina.scarlat@howtoweb.co Co-Founder of Akcees @ How To Web




event

Topconf Bucharest 2014

International Software Conference, 11-12 June 2014 in Bucharest

Knowledge management is no longer imaginable without the Internet, with its capacity for storing enormous volumes of data, exchanging information about developments and trends, establishing contacts and creating business relationships. But an intensive exchange of know-how and experience also calls for personal familiarity, and this is something the Internet on its own cannot deliver. For this reason, specialized technical conferences such as Topconf Bucharest are a key factor when it comes to dynamic know-how transfer across technical, national and cultural borders.

Topconf Bucharest is an international conference that serves as the no. 1 meeting place for software developers and IT specialists, where they can exchange information about the latest developments, current trends and important aspects of all software technologies. Topconf Bucharest provides the ideal surroundings and atmosphere for developers, architects, IT decision-makers and consultants from all over Europe, as well as the rest of the world, to establish and firm up contacts with one another.

The variety of topics at Topconf Bucharest 2014 is as broad as the technologies, ranging from Security to Mobile, Agile to Project Management and Product Development & Management. Topconf Bucharest 2014 paves the way for the exchange of experiences across and beyond all borders. In order for specialists from all over the world to participate in such an exchange, they need a common meeting and networking point at their disposal. Topconf Bucharest is the place to be for this form of global communication, and English is the official conference language.

Here is why Topconf Bucharest will be the center of the software world in June 2014:
• a 1-day pre-conference with a practice-related tutorial program,
• a 2-day main conference with leading speakers on all topics relating to software development, and 4 parallel technical tracks on both days,
• exhibition facilities for platforms, tools and further education programs,
• official networking events and activities,
• an effective communication platform for exhibitors and sponsors from the national and international software community.

Topconf Bucharest 2014 at a glance

Event: International IT conference with a Romanian focus, for participants, speakers, exhibitors and media.

Focus: Development technologies and Project Management / Agile / Product Management.

Location: Bucharest, Romania – Ramada Parc.

Participants: IT specialists from Romania and neighboring countries, Europe and overseas.

Special features:
• The first and only large-scale event of its kind in Romania
• Broad variety of topics, leading speakers from the global software community
• Exhibition platform
• Supplementary program (social event) in line with the special interests of the participants

Topics: Security • Testing • Mobile • Project Management • Agile & Lean • Cool and upcoming • Product Development & Management • Cloud computing • Customer experience

Sponsors: Qualitance (Bucharest, Romania); Nortal (Romania and Estonia); Microsoft (USA); Nokia (Finland).

Organized by: Topconf OÜ, Tallinn, Estonia – topconf.com

Key dates: 10 June: Tutorials • 11-12 June: Conference days • 13 June: Windows Azure Workshop

Website: http://topconf.com/bucharest-2014/


Chris Frei

frei@topconf.com Organizer @ TopConf


event


The international and local JavaScript community will meet on the 3rd of June in Bucharest!

SmartWeb Conference, the first event about web design organized by Evensys last year, received a lot of appreciation from local practitioners and from international ones, from countries such as Germany, Bulgaria and Hungary.

The positive feedback from the web community has determined us to launch the first edition of JSCamp, which will cover the new trends in web design and front-end development. The first edition of JSCamp Romania will take place on June 3, 2014 and will bring together international and local experts in JavaScript, who will share the secrets of the "most popular web programming language". JSCamp Romania will include four intensive conference sessions about web development trends, case studies and international experiences, open web technologies, methodologies and advanced tools.

This year we will enjoy the presence of two of the best experts in JavaScript: Robert Nyman, Technical Evangelist, Mozilla, and Vince Allen, Software Engineering Manager, Spotify.

Robert Nyman is a Technical Evangelist for Mozilla and the Editor of Mozilla Hacks. He is a strong believer in HTML5 and the Open Web and has been working with front-end development for the web since 1999, in Sweden and in New York City. He also regularly blogs at http://robertnyman.com, tweets as @robertnyman and loves to travel and meet people. His speech will cover different thoughts and experiences that we go through when developing, and will try to make us think about our approach and technology choices.

Vince Allen is a software engineering manager at Spotify in New York City. He has been designing and programming for almost 20 years and devotes most of his spare time to FloraJS, a JavaScript framework for creating natural simulations in a web browser. In his speech he will present how to use JavaScript as a creative tool to

build Braitenberg Vehicles and other natural simulations in a web browser.

Other speakers confirmed at the event are: Martin Kleppe, co-founder and Head of Development at Ubilab, a company that develops applications based on the Google Maps API; Sebastian Golasch, Specialist Senior Manager Software Developer at Deutsche Telekom, who has developed applications with JavaScript, PHP and Ruby over time and is now part of the JavaScript community; Phil Hawksworth of R/GA, a JavaScript developer who has specialized in developing websites since the late '90s; Patrick H. Lauke, Accessibility Consultant, The Paciello Group; Tero Parviainen, independent software specialist with over 12 years of experience; and Kenneth Auchenberg, Technical Lead for GoToMeeting Free at Citrix.

Through this event, organized for local and international web developers, we want to offer new opportunities for networking and collaboration. That is why JSCamp Romania will be about the latest trends in web development, networking and international know-how.

Iunieta Sandu

iunieta.sandu@evensys.ro PR & Marketing Coordinator @ Evensys



event

The 5th edition of the Agile mammoths ended in Cluj-Napoca!

On April 4th-5th, Colors in Projects, in partnership with Today Software Magazine, came for the third time to Cluj with the event "...even mammoths can be Agile". Unlike previous editions, this time we had two parallel streams and World Café sessions, so participants could choose which presentations to attend based on their topics of interest. In the first stream, the presentations focused on the "hard" part of Agile, related to information and technology; the second addressed the area of "soft skills". In the World Café sessions of both streams, the participants were invited to debate different topics with the event speakers, who moderated the six groups formed.

Guided by our moderators, Dan Radoiu and Adina Grigoroiu, the over 200 participants had the opportunity to talk with local and international speakers, each sharing their own experience. We had three international speakers: Michael Nir, Andrea Provaglio and Reiner Kuehn. Michael Nir showed us some of the secrets of a successful Product Owner (among them the powers of Business Analysis), their vital role in organizational development, as well as the roadmap on the way to excellence. Andrea Provaglio told us about the major cultural changes encountered in knowledge-based projects, and about the strategies and mental models required to meet high levels of uncertainty; we also talked about ephemeralization, uncertainty, knowledge and the need to restructure our wetware in the present context. Reiner Kuehn explained how and why it is important to use slack in Agile project management, what impact it has on innovation and motivation, as well as the challenges of using slack in an environment where resource usage is a main KPI and managers measure progress by the amount of delivered features.

George Anghelache & Cristian Cazan delighted us with a different kind of presentation: imitating famous characters, Luke Skywalker and Darth Vader, they fought with lightsabers in the challenge to change the IT galaxy one Agile transformation at a time. They also staged various situations faced by a Scrum Master, offering for each of them solutions to improve communication and activities. We found out from Ruxandra Banici that the method selected to lead a project (Scrum or another) must be continuously adjusted to the needs of the project, while trying, at the same time, to change the context so that it supports the basic method – all by following a clear set of principles. Oana Oros highlighted, by analogy with examples from everyday life, a few communication gaps between Agile actors – hidden gaps that can occur on several levels, such as analysis, development, testing, leadership and management. Andrei Doibani talked about how the adoption of Agile methodologies works in highly regulated industries, and about hybrid approaches such as Water-Scrum-Fall. Izabella Paun & Danielle Popescu-Abrudan showed us how the business behind the scenes works, how a Delivery Manager and a Project Manager work together, and how the Agile approach helped them build success in a highly complex structure. Adrian Lupei told us about the Kanban method: he explained how, by implementing Kanban in Bitdefender, work processes were improved, visibility increased and teams started to collaborate better. Dan Berteanu challenged us to debate: what are the criteria we use to make decisions when they have ethical implications? By bringing to our attention two of the most famous ethical experiments – the Trolley and the Prisoner problems – Dan managed to create a cheerful atmosphere, full of humor. Calin Damian told us about the importance of choosing the right tools when adapting to the new computing and storage requirements that organizations are facing, which have a major impact on software development, quality assurance and support teams. Simona Bonghez talked about decisions in projects and their consequences on project evolution. We learned from

Simona what the key decision points in the life cycle of a project are, which decision models we can rely on, and which biases influence the decision-making process of a project manager.

We ended the event with many gifts and surprises, and we all went to the cocktail party, where The Glass Fish band delighted us with very good music. We had many colorful balloons, big smiles from everyone present and lots of good cheer! The next day, two workshops were held: Conflict Management, delivered by Simona Bonghez, and Kanban in Practice, delivered by Adrian Lupei, both exceeding the maximum number of participants.

We will gladly return next year, for the fourth time, to Cluj – it has already become a tradition. We invite you to see images from the event on our Facebook page, Colors in Projects, or by accessing the link http://goo.gl/kz06Ob. We welcome you at our next events, on October 10th-11th, 2014, in Iasi, and next spring in Cluj!

Simona which are key decision points in the life cycle of a project, which are the decision models we can rely on, and which are the biases that influence the decision making process of a project manager. We ended the event with many gifts and surprises and we all went to cocktail, where The Glass Fish band delighted us with very good music. We had many colorful balloons, big smiles from everyone present and lots of good cheer! The next day two workshops were held: Conflict Management delivered by Simona Bonghez and Kanban in practice delivered by Adrian Lupei, both exceeding the maximum number of participants. We will gladly return next year for the fourth time in Cluj, it has already become a tradition. We invite you to access images from the event on our Facebook/ Colors in Projects page or by accessing the link http://goo.gl/kz06Ob. We are welcomeing you at our next events on October 10th-11th 2014, in Iasi and next spring in Cluj!

Adina Grigoroiu, CAPM
adina.grigoroiu@confucius.ro
Trainer and consultant @ Colors in Projects


communities


IT Communities

The month of May and the beginning of June is the period when the most important events of the first part of the year usually take place. In Cluj, we recommend the Catalysts Coding Contest for programmers and the first edition of Techsylvania, without forgetting the classics by now: ITCamp and the Romanian Testing Conference. In Bucharest, we recommend the I T.A.K.E. Unconference, JSCamp and Topconf, and we await those in Brasov at the first launch event of the magazine in their city. If you haven't made plans yet, we invite you to consider the suggestions below.

Transylvania Java User Group
Community dedicated to Java technology.
Website: www.transylvania-jug.org
Since: 15.05.2008 / Members: 576 / Events: 45

TSM community
Community built around Today Software Magazine.
Website: www.facebook.com/todaysoftmag
Since: 06.02.2012 / Members: 1434 / Events: 19

Cluj.rb
Community dedicated to Ruby technology.
Website: www.meetup.com/cluj-rb
Since: 25.08.2010 / Members: 176 / Events: 40

The Cluj Napoca Agile Software Meetup Group
Community dedicated to the Agile methodology.
Website: www.agileworks.ro
Since: 04.10.2010 / Members: 428 / Events: 63

Cluj Semantic WEB Meetup
Community dedicated to semantic technology.
Website: www.meetup.com/Cluj-Semantic-WEB
Since: 08.05.2010 / Members: 178 / Events: 26

Romanian Association for Better Software
Community dedicated to senior IT people.
Website: www.rabs.ro
Since: 10.02.2011 / Members: 238 / Events: 14

Testing camp
Project that aims to bring together as many testers and QA people as possible.
Website: tabaradetestare.ro
Since: 15.01.2012 / Members: 303 / Events: 28

Calendar

May 12 (Cluj) Launch of issue 23/May of Today Software Magazine www.facebook.com/todaysoftmag
May 15-16 (Cluj) Romanian Testing Conference www.romaniatesting.ro
May 16 (Cluj) Catalysts Coding Contest contest.catalysts.cc
May 17-24 (Cluj) Startcelerate cluj.startcelerate.com
May 22-23 (Cluj) ITCamp www.itcamp.ro
May 29-30 (București) I T.A.K.E. Unconference 2014.itakeunconf.com
May 30 (Brașov) Launch of issue 23/May of Today Software Magazine www.facebook.com/todaysoftmag
May 31 (Cluj) Techsylvania hackathon - TSM recommendation techsylvania.co
June 2 (Cluj) Techsylvania conference techsylvania.co
June 3 (București) JSCamp www.jscamp.ro
June 10-13 (București) Topconf Bucharest 2014 topconf.com/bucharest-2014/



programming

Delivering successful mobile apps

I've been wondering lately what it means to create and deliver a successful mobile app. Competition in the app business today is so fierce that you have to deliver nothing less than the best and most fluid app possible to stay in the game. I've been an SQA engineer in the mobile world for 4 years now, so for me, delivering great apps is really important. Since you won't find a recipe for success in just one article, I'm going to share my findings and personal experiences, to help you build your best app yet.

Larisa Gota

lgota@smallfootprint.com QA Engineer @ Small Footprint

There's really no popular web app that has not been ported to mobile. Since 2011, when the mobile platform hit critical mass, mobile has been growing constantly. Just to give you an example, last year Facebook's mobile daily active users exceeded its desktop daily active users¹, and since this is old news, I won't dwell on the subject any longer. You must be a believer by now.

1 http://www.businessinsider.com/facebook-mobile-bigger-than-desktop-2013-1

What is a “successful app”?

For me, as a user, a successful app is useful, delightful, easy to use and highly ranked in the app store. As a QA, success means delivering an app that has a delightful design, performs well (smooth, fast) and is bug free (as much as possible). Creating useful and innovative mobile apps isn't rocket science, but there's really no way I can tell you what app you should build to be successful, earn a lot of money, get famous, etc. Both really useful apps and those created for people who just want to "pass time" can find success and hit thousands of downloads. There are lots of examples of people who wanted success and got it, but there are also people who develop as a hobby, or beginner


developers who want to prove to themselves that they can indeed build an app. An example of a very popular app is Angry Birds, which became a phenomenon: today you can find it on all platforms (mobile or web), in movies, cartoons, toys, etc. Also very popular today are the games that help the user "pass time", like Candy Crush Saga, a game that some people are still obsessed with. There are stories with happy endings for the developers of games like Threes, Luckiest Wheel or the popular 2048 – people who never anticipated the success of their apps – but there are also stories of people who couldn't handle the pressure. An example would be Flappy Bird (and if you haven't played the original game, too bad). The developer, Dong Nguyen, removed it from the App Store and Google Play, stating that his life had gotten "too complicated". He claimed to have earned $50,000/day just from in-app ads. Intentional or not, Nguyen's announcement of the removal of the game turned into what could possibly be the most genius act of marketing in the history of the app market, because the number of downloads exploded after his post on Twitter. You can still find a bunch of copycat apps on all platforms,




though, trying to achieve the success of Flappy Bird.

Where do you start?

If you are a developer or a QA working on a mobile project, I'm assuming you already have the details of the app and your mockups, the requirements, and the platforms the app needs to run on have already been decided, so you have a starting point. Otherwise, follow these pretty simple principles to get started:
• A mobile app must solve a problem. Either it's a really useful app or it's just a game that will help the user pass time.
• Focus on one thing and do it well. Be clear about what your app will do; you should be able to sum it up as a "one-liner".
• Match your own skills and interests to an everyday problem, and solve it better, or with a difference. There are a lot of applications out there. Don't waste effort cloning something that's already been done. That doesn't mean you can't improve an existing app, but you should always aim to find a new take on a problem, with a solution that's better, more fun, or that adds a new dimension. In other words, always bring something fresh to your solution.

Go native

The Facebook app went native; the LinkedIn app went native. Should your app go 100% native too? Yes. I know developing a native app can seem like a daunting task, but from my experience it's totally worth it. When the comparison is made, the perceived benefits of developing an HTML5 hybrid app are vastly outweighed by the real benefits of the native app experience. The most important factors (monetization, performance, user experience, security) are all skewed heavily in favor of native apps. So this shift toward native apps is not a trend that one can afford to ignore. Part of what fuels this rise is what makes mobile computing a unique experience: native applications and the rich user experience associated with them.

There's a huge amount of money at stake in their respective app stores, so both Apple and Google are no slouches in ensuring that their mobile operating systems are updated to be compatible with the latest and greatest features on the market. Here again native apps win out: they can take advantage of OS updates and innovations quickly, in ways that are simply impossible for web apps.

What platform to build for first?

Choose your first platform wisely! When it comes to building native apps, building for all mobile platforms at once is not a good idea; in fact, you had better build for one mobile platform first. Believe it or not, choosing the first mobile platform to build your app on has everything to do with the end user's behavior and little to do with each platform's ability. By now, both iOS and Android have reached their own levels of being remarkable, and each appeals to its own type of users. If you can't decide between them, let me help you out.

iOS often wins over Android as the first option because:
• Android's ecosystem is fragmented, so development implies more work
• iOS users seem to be more engaged
• iOS users are more likely to pay for apps

But there are cases when Android makes a better first option:
• when you have identified a niche of paying Android users, or you plan to distribute your app within an organization
• when you cannot afford to build for iOS (you need a Mac and a developer account just to get started)
• when you're not convinced that your app will pass the requirements Apple imposes for adding your app to the App Store

All that, of course, applies if you build your own app; if it's just a project you work on, then everything is decided for you. Still, there are some good points that you don't want to ignore and that may help you and your customer:
• If it's up to you, start building for the platform you are most accustomed to,
If you can’t decide between them, for adding your app on App Store then let me help you out: • You cannot afford to build for iOS iOS often wins over Android as the first (you need a Mac, a dev account just to option: get started) • Android’s ecosystem is fragmented, so the development implies more work All that of course if you build your own • iOS users seem to be more engaging app, but if it’s just a project you work on, • iOS users are more likely to pay for well then everything’s decided for you, but the apps still, there are some good points that you don’t want to ignore and that may help you But, there are reasons when Android and your customer: http://www.teqarazzi.com/wp-content/ makes a better first option: • If it’s up to you, start building for the uploads/2013/05/apple-vs-android-vs-windows.png • When you have identified a niche platform you are most accustomed to, www.todaysoftmag.com | no. 23/May, 2014

15


one that you know best
• Follow the platform's guidelines and principles and you should end up with a pretty solid app
• Start with the mockups, requirements, etc., but know that those may change while actually developing the app, and that's perfectly normal: adapting to the latest designs makes sense
• Do your research first and find out what's new out there, because all platforms strive to improve with great new features. Don't be afraid to use them, as they may become the key to your problem
• Always find the best solution from the users' point of view, because in the end they will be the ones using the app extensively

QA is involved more and more in requirements review, project estimations and BI analytics, so they really need to be up to date with what's new out there. If you are a QA, be ready to express your opinions, concerns and ideas, but always be prepared to justify your input.

Don’t mimic elements from other platforms!

One of the hardest parts (or one that has been a real challenge for me) is "porting" the app from one platform to another. No matter how tempting it is to make the app look and feel similar on all platforms, remember that each of them has its own properties and features, so you may not be able to keep everything the way you want. Don't try to make the apps look alike on all platforms, and don't try to mimic elements from one platform on another. Not only is it a bad user experience, but in the end you may need to make so many compromises that you'll end up recreating the app from scratch, because fixing an issue will take longer than redoing everything. This means losing time, money, etc. For example, the "ActionBar" in Android is not going to be successfully replaced by the "Application Bar" in Windows Phone or the "Navigation Bar" in iOS. So it's not enough to just identify the corresponding elements from one platform to another; the whole structure needs to be redesigned when building the app for another platform. It's really unlikely that a user will use the app on multiple platforms, and once he is used to the flow of the platform, he will expect the app to behave in a certain way. Be flexible and keep the app flexible, so that by


the time the app actually goes live, you can add whatever feature may be needed or may have been released in the meantime. Once you have started developing the app, don't stall. Build it, test it, put it in the marketplace; otherwise you risk that everything you have done so far becomes outdated, and all the hard work you put into it will need "adjustments" because it is no longer compliant with the guidelines. I experienced something similar with an in-house app that got some work done whenever a developer was available. The development took more than a year; the principles and guidelines changed a lot, plus the updates to the OSs changed the game, so the overall work was far greater than it would have been if the app had been developed and put on the market as soon as possible. Platforms release updates constantly, and you do have to update the app accordingly, whether it is still under development or already on the app market. While you build the app for the next platform, the one on the market will give you an idea of what to do next, so you can rely on your users and customers to give you feedback and find out where your app stands.

Don't add elements meant to help "novice users"

You need to make a difference between the novice users of the platform and those of your app, and you need to have the end user in mind and try to cover his needs. For example, the "Back" gesture is a pillar of navigation in Android and Windows Phone: it lets you navigate to a previous screen, close a virtual keyboard and, in WP8, get back to the last opened app. So adding buttons for navigation, just because the gesture may not seem intuitive enough, may not be the best idea. We need to trust that the app will be used by sharp people; even novice users of a particular platform will get the hang of your app as they get used to the platform.

How do you get your app discovered?

This section is addressed to developers as well as QAs. A lot of the time, QAs are involved in a project 100%, from specs to shipment, so knowing a few things about how to make your app more "discoverable" will help you and your client. Some of the factors you may not have control over, like a platform ranking the results of a market search by reviews and recommendations, by ratings, by relation and cross-sell, or by trending apps; but there are a few ways to influence those numbers and make your app more engaging, because ultimately this is a cycle (the more effort you put into it, the more ratings you get). Here are some items discussed at Google I/O 2013 which can be applied no matter the platform. First of all, the app metadata is important:
• The title: clear and creative (which will help a lot with SEO)
• The description: put the goal first, as concisely as possible, and make sure that on smaller screens it stays above the fold
• Screenshots: express your best features
• Video previews: could be the most convincing element (if screenshots are not enough)

Think about targeted users, if you have any. Their reviews will help you in the charts. You don't want to be pulled down in the ratings by negative reviews from people who are not even targeted, so try to exclude that category if your app is not addressed to everyone.

There are a few things you can do to improve the process of getting your app discovered²:
• Ensure helpful web anchors
• Make the package smaller (you don't want your users to uninstall your app because they don't have enough space to install another)
• Avoid common mistakes, like typos in titles: Google Play autocorrects the user when a typo is found in the search bar, so the user may miss your app for this reason
• Create a viral loop for your app, like social leaderboards; your app may become more popular, at least among your friends

"A recipe" for a successful app

From my point of view, as a QA engineer, a successful app means building a robust app that performs great, is delightful to use and gets a lot of long installs, good reviews and ratings. Intended or not, all the apps mentioned in the beginning of the article have something in common: they follow the Kano Model³.

In the figure below you can see the bottom curve, which shows capabilities that are expected by the customer but aren't necessarily a selling feature, meaning you may have a great idea, but you're limiting your users with some basic functionality.

2 http://www.businessinsider.com/facebook-mobile-bigger-than-desktop-2013-1
3 http://www.shmula.com/customer-experience-kano-basics-and-shiny-objects/2208/




Fig 1. Kano model

This way you won’t get noticed; providing only the necessary will get you as far as the lower right corner, leaving your users indifferent. Why should the users get your app instead of another similar one, right?

The middle curve (Performance/Linear) shows capabilities that have a linear relationship to customer satisfaction. In other words, the more you provide, the happier your customers will be. Flappy Bird (the original) was so popular because it had something that the other copies out there are missing: good performance and unflinchingly difficult levels. Not even the apps believed to have inspired the developer were that good (like "Piou Piou vs. Cactus").

The top curve shows capabilities that are unexpected by the customer. Their presence can improve customer satisfaction greatly (take Angry Birds, for example), but not having them will not be a deal breaker for the customer (2048). Knowing which features and capabilities meet your customers' basic needs, which features will excite them, and which features they are indifferent to will help you decide what to focus your resources on. As long as you aim for the upper right corner, you'll be fine. Of course, as we have seen, luck has a lot to do with the success of an app, but remember that if you don't buy a lottery ticket, you will never win the big prize; so get started today.




programming


Java 8, lambda expressions and more

It's spring… the time of changes and hope… Oracle contributes to all that with a new version of the standard Java platform. I am talking about version 8, released in March 2014.

Silviu Dumitrescu

silviu.dumitrescu@accesa.eu Line manager @ Accesa



Beginning with the current issue of Today Software Magazine, I wish to change the type of articles that I write. I cannot say I abandon the idea of book reviews, which are an important way of putting forward valuable books from the IT library, but I will also add articles of a more technical nature. By doing this, I hope the readers of the magazine will be inspired to discover what is new and performant in the world of software application development. I am very glad this issue is launched in Brasov as well and, though it is the first launch there, I hope it will be followed by many others. Brasov has an enormous potential, and I love this city very much.

Java SE8 is considered to be revolutionary, due to some of the newly introduced features. Initially scheduled for September 2013, the release was postponed to March 2014. The reasons are manifold, but they are basically related to debugging and security improvements, especially on the client side, with JavaFX as the main concern. There are many changes and additions in the language, but probably the most spectacular one is the introduction of lambda capabilities, seen as an important benefit for parallel programming. Actually, the efforts for increasing the performance of parallel programming were already visible in version 7; the introduction of the Fork-Join framework is only one example.

In the first part of the article I will focus mainly on lambda expressions; in the final part I will discuss a brand-new JavaScript engine, and in the following articles I will talk about other Java SE8 related topics.

A lambda function (anonymous function) is a function that is defined and called without being bound to an identifier. Lambda functions are a form of nested functions, meaning that they allow access to the variables in the scope of the containing function. Anonymous functions were introduced by Alonzo Church in 1936, in his theory of lambda calculus¹. In programming languages, anonymous functions were first implemented in 1958, as part of the Lisp language. Some object-oriented languages, such as Java, have similar concepts, like anonymous classes. It is only in the 8th version of the Java language that anonymous functions are added; other languages, such as C#, JavaScript, Perl, Python and Ruby, have offered support for this concept for a long time.

1 http://en.wikipedia.org/wiki/Lambda_calculus

Lambda expressions allow us to create instances of classes with a single method in a much more compact manner. A lambda expression is made of:
• a list of formal parameters, separated by a comma and possibly contained within round brackets
• the directional arrow ->
• a body consisting of an expression or a set of instructions.

A functional interface is any interface containing only one abstract method. For this reason, we can omit the name of the method when we are implementing the interface, and we can eliminate the usage of anonymous classes. Instead, we will have lambda expressions. A functional interface is annotated with @FunctionalInterface.
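To make the compact syntax tangible before the full example, here is a small comparison of my own (a sketch, not part of the original listing): the same Comparator written as an anonymous class and as a lambda expression, plus a custom interface annotated with @FunctionalInterface. The PriceFormatter interface is a hypothetical name chosen for illustration:

import java.util.Comparator;

public class LambdaSyntaxDemo {

    // a custom functional interface: exactly one abstract method
    @FunctionalInterface
    interface PriceFormatter {
        String format(int price);
    }

    public static void main(String[] args) {
        // before Java 8: an anonymous class implementing the single
        // abstract method of the functional interface Comparator
        Comparator<String> byLength = new Comparator<String>() {
            @Override
            public int compare(String s1, String s2) {
                return Integer.compare(s1.length(), s2.length());
            }
        };

        // Java 8: the same comparator as a lambda expression –
        // formal parameters, the arrow, and a body
        Comparator<String> byLengthLambda =
                (s1, s2) -> Integer.compare(s1.length(), s2.length());

        // the custom functional interface implemented by a lambda
        PriceFormatter formatter = price -> price + " RON";

        System.out.println(byLength.compare("ab", "abc"));       // -1
        System.out.println(byLengthLambda.compare("ab", "abc")); // -1
        System.out.println(formatter.format(10));                // 10 RON
    }
}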

public String toString() { return “Product [name=” + name + “, price=” + price + “]”; } public static Comparator<Product> ascendingPrice = (p1, p2) -> { return p1.getPrice() - p2.getPrice(); }; public static Comparator<Product> descendingPrice = (p1, p2) -> { return p2.getPrice() - p1.getPrice(); } }

The test class will bring something in addition, compared to a class known until version 8. The collection processing will not be done with a classical foreach. As a part of the Collections API, we have the new java.util.stream API, which offers support for functional operations on streams of items. In our example, we will use a basic interface of this API, namely Consumer, which represents a procedure that accepts only one entrance argument and does not return anything. With Consumer, we will be able to use lambda expressions: import java.util.Set; import java.util.TreeSet; import java.util.function.Consumer; import model.Product; public class TestLambda { public static void processProducts( Set<Product> products, Consumer<Product> block) { for (Product p : products) { block.accept(p); } } public static void main(String[] args) { Product p1 = new Product(); p1.setName(“onion”); p1.setPrice(10); Product p2 = new Product(); p2.setName(“tomato”); p2.setPrice(20);

public int getPrice() { return price; } public void setPrice(int price) { this.price = price; } public void printProduct() { System.out.println(this.toString()); } @Override

Set<Product> ascendingPriceProducts = new TreeSet<>( Product.ascendingPrice); ascendingPriceProducts.add(p1); ascendingPriceProducts.add(p2);

www.todaysoftmag.com | no. 23/May, 2014

19


programming Java 8, lambda expressions and more System.out.println(“In ascending order:”); processProducts(ascendingPriceProducts, p -> p.printProduct()); Set<Product> descendingPriceProducts = new TreeSet<>(Product.descendingPrice);

}

public class EvalScript { public static void main(String[] args) throws Exception { // create a script engine manager

descendingPriceProducts.add(p1); descendingPriceProducts.add(p2);

ScriptEngineManager factory = new ScriptEngineManager();

System.out.println(“\nIn descending order:”); processProducts(descendingPriceProducts, p -> p.printProduct());

// create a Nashorn script engine ScriptEngine engine = factory. getEngineByName(“nashorn”);

}
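As a side note that goes beyond the example above, Java 8 also adds static and default factory methods to the Comparator interface itself, so the two comparators could be expressed without writing any subtraction logic. A minimal sketch of how the declarations inside Product could read instead:

// Equivalent comparators built with the factory methods that
// Java 8 adds to java.util.Comparator (comparingInt, reversed).
public static Comparator<Product> ascendingPrice =
        Comparator.comparingInt(Product::getPrice);
public static Comparator<Product> descendingPrice =
        ascendingPrice.reversed();

Besides being shorter, comparingInt also avoids the integer-overflow risk that the subtraction-based comparator has for extreme price values.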

As a consequence of using the stream API, the operations carried out on a collection can be much more complex than the one illustrated in the example, namely: filtering by a selection predicate, mapping each filtered object and carrying out an action on every mapped object. I have only presented the last operation. These are called aggregate operations. I would also like to make an observation on the previous code: inside a TreeSet, the comparator takes over the role of the equals() method; you can verify this by altering the code so that both products have the same price, in which case the set treats them as duplicates and keeps only one element.
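To make the other aggregate operations concrete, here is a small sketch (my own addition, not part of the original example) that chains all three on the collection built above: a selection predicate, a mapping step, and an action on every mapped element:

ascendingPriceProducts.stream()
        .filter(p -> p.getPrice() > 10)   // selection predicate
        .map(Product::getName)            // map each filtered object
        .forEach(System.out::println);    // action on every mapped object

With the two sample products, only tomato passes the filter, so only its name is printed.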

Besides the lambda expressions, an important feature of this release (together, obviously, with the syntactical changes and the introduction of new APIs) is, in my opinion, the Nashorn JavaScript engine. Through it, one can integrate JavaScript scripts into classical Java code. The engine is based on the ECMAScript 262 standard. It was written completely from scratch in order to increase performance, so it is completely different from the already existing engine, Rhino. I will only provide a short example of using this engine, with the promise that in the future I will present more details:

import javax.script.*;

public class EvalScript {
    public static void main(String[] args) throws Exception {
        // create a script engine manager
        ScriptEngineManager factory = new ScriptEngineManager();

        // create a Nashorn script engine
        ScriptEngine engine = factory.getEngineByName("nashorn");

        // evaluate a JavaScript statement
        try {
            engine.eval("print('Hello, World!');");
        } catch (final ScriptException se) {
            se.printStackTrace();
        }
    }
}

By running this example, we get Hello, World! on the console. As a last observation: for editing I used Eclipse Kepler, with Eclipse Java 8 Support (for Kepler SR2) JDT, PDE 1.0.0 installed from the Marketplace; this is only needed until Eclipse Luna is released (probably in May). As the Java version, I used JDK 1.8.0_05. I hope I have aroused your interest in Java 8 and, as usual, I look forward to discussions with those who are interested. My contact is: Silviu.Dumitrescu@accesa.eu.



programming

Qt: How I Came To Love C++


Ambrus Oszkár

oszkar.ambrus@accenture.com Software Engineer @ Accenture

After so many nerves lost over all the quirks and twists and turns of C++ with OpenGL and MFC, working with C# or Java was like a breath of fresh air. But on a day of Irish rain and wind, in a dim cubicle in a corner of an office, along came Qt, and a wonderful C++ world opened up, with new revelations ever since. After moving over to Qt, working with C++ became a joy again, and it is one of the environments I most dearly work in. Development is quite an experience, everything remaining native, fast and easily maintainable at the same time.

Why Qt?

In C++, flexibility comes at a great price in complexity overhead and a plethora of pointer-related problems, and the language is not exactly multi-platform. On the other hand, for application software one needs a suitable framework that integrates well with the language philosophy and is also practical to work with. Qt, with its constructs that complement language deficiencies and its truly cross-platform framework that covers practically all necessities, solves both of these problems, providing for very enjoyable and clean development. This is how Qt has come to be the de-facto standard C++ development framework.

Qt is a comprehensive framework for UI and application development. It has a consistent and clean API. It supplies a robust object communication mechanism. It is truly platform independent, but still manages to provide all the necessary low-level features and the native UI behaviors of the operating systems it supports. It also provides a set of development tools for rapid application development (RAD) that can replace or integrate with major IDEs such as Visual Studio or Eclipse. It is thoroughly and cleanly documented, has a large and strong community behind it and is widely adopted. You should use it.

A Comprehensive Library

Qt brings a collection of libraries to provide for all the basic needs in application development. Of course, as is the case with any other framework, Qt does not attend to all the whims of all developers of creation, but it covers practically everything one would expect from a general framework. Qt has a set a set of libraries called QtCore, covering, as the name implies, a set of core functionalities. Without providing an exclusive enumeration, this core package includes the following: • an application environment with event loops and and an event system • a GUI system that is one of the strongest assets of Qt • templated containers such as lists, queues, vector and maps. These provide a perfect alternative to STL containers, providing mechanisms for Java-style or STL-style iteration and conversion to the STL counterparts • resource management classes




• a model-view system for decoupling data and its views, based on a set of abstract classes
• a thread framework, which has been around and much appreciated for quite a while
• string processing and regular expressions
• OS functionalities such as file processing, printing, networking capabilities and system settings.

Additionally, there are several other packages, most of them mature and feature-rich in their respective areas. These areas include XML processing, multimedia capabilities, SQL database support, unit testing, OpenGL, WebKit integration, internationalization libraries and tools, SVG manipulation, D-Bus communication, ActiveX controls, and others.

Signals and Slots: a Novel Object Communication Mechanism

Qt provides some language additions through macros that are processed by the Meta-object Compiler (MOC) code generator tool. This results in more advanced introspection support and the possibility of having certain special mechanisms within Qt-enabled classes. The main such special mechanism is the signals and slots approach to object communication. Signals and slots provide a better alternative to callbacks, by being loosely coupled and type safe. This means that, when using signals-slots connections, objects don't know about any objects connected to them, and type safety is guaranteed by limitations on connections, based on the compatibility between the arguments of the signals and slots to be connected. Signals are emitted when an event needs to be announced. Signals are basically bodiless functions. Slots are regular functions that are called in response to a signal. The connection is set up through a QObject::connect() call, which allows only those signals and slots to be connected that have compatible arguments, thus assuring type safety. Objects are unaware of the signals their slots subscribe to or of the slots that are connected to their signals, assuring a loose coupling of objects. Below we provide an example of how a "normal" C++ class can be extended to support signals and slots and how that behavior is realized.



The minimal C++ class declaration:

class Counter {
public:
    Counter() { m_value = 0; }

    int value() const { return m_value; }
    void setValue(int value);

private:
    int m_value;
};

Augmentation with Qt-specific constructs:

#include <QObject>

class Counter : public QObject {
    Q_OBJECT
public:
    Counter() { m_value = 0; }

    int value() const { return m_value; }

public slots:
    void setValue(int value);

signals:
    void valueChanged(int newValue);

private:
    int m_value;
};

The Qt-enabled version has the same internal state but now supports signals and slots, having been augmented with the following:
• The Q_OBJECT macro, which is mandatory for all classes that want to use signals and slots
• Inheritance from QObject, which is also necessary for signals and slots and for many of the features Qt offers, such as dynamic properties, hierarchical object management, better run-time introspection, etc.
• The signals macro, which defines a signal that can be used to notify the outside world of changes, and
• The public slots macro, which declares a slot that signals can be connected to.

These special macros are picked up by the MOC code generator tool, which generates the appropriate classes and functions that realize all the Qt-specific mechanisms. Slots need to be implemented by the application programmer. Below is a possible implementation of the Counter::setValue() slot:

void Counter::setValue(int value) {
    if (value != m_value) {
        m_value = value;
        emit valueChanged(value);
    }
}

You can see the emit statement, which emits a valueChanged() signal with the new value as an argument whenever a new value is assigned. Below we show a code snippet using two Counter objects, a and b, where the valueChanged() signal of a is connected to the setValue() slot of b:

Counter a, b;
QObject::connect(&a, &Counter::valueChanged,
                 &b, &Counter::setValue);

a.setValue(12); // a.value() == 12, b.value() == 12
b.setValue(48); // a.value() == 12, b.value() == 48

Notice the behavior of the signals and slots mechanism:
• Calling a.setValue(12) results in a emitting the valueChanged(12) signal, which triggers the setValue() slot of b, that is, calls that function
• This results in b emitting the valueChanged() signal as well, but nothing happens afterwards, since that signal has not been connected to any other slots

(Figure: a graphical depiction of how connections are formed between various objects as a result of connect() function calls.)

Resource Management

Two mechanisms of resource management in Qt are ownership hierarchies and implicit sharing. An ownership hierarchy consists of an object tree which handles the destruction of descendants. Whenever a new QObject-based object is created on the heap (using new), it can be assigned a QObject parent, resulting in a hierarchical object tree. Whenever an object in the tree is destroyed, all its descendants are destroyed as well. In the following code snippet, obj2 will be destroyed during the destruction of obj1 (please note that the creation of obj2 does not use a copy-constructor, since copy-constructors are private in QObject).

QObject *obj1 = new QObject();
// this also sets obj1 as the parent of obj2
QObject *obj2 = new QObject(obj1);

delete obj1; // obj2 is destroyed as well

Truly Cross-platform and Native

Qt is truly cross-platform. This means a single codebase and a single development stream for any number of supported platforms; one only needs a different build configuration to deploy to a different system. This was one of the major reasons for choosing it in a large industrial project we have worked on, since the application needed to be supported on Linux-based embedded systems as well as Windows PCs. The major operating systems supported are Microsoft Windows, Linux and Mac OS X. Other supported systems include Android, Solaris, HP-UX, and others. The executables generated are truly native: Qt uses the native GUI elements and the low-level APIs of the platforms it supports. For developers, it provides a platform-independent encapsulation of the local system and its functionalities, such as file processing, networking or printing.

(Figure: a simplified view of the Qt architecture, involving some of the supported platforms.)

Implicit sharing refers to an internal mechanism mostly used for Qt containers and large objects. It implements the copy-on-write approach, i.e. it only passes a pointer to the data structure upon every assignment and copies the full data structure only when there is a write operation. This makes sure that no unnecessary copies of large data structures are created, thus improving performance. In the code below, suppose that list1 is a very large list. When list2 is created, no copy is made of list1; only a pointer is passed internally, handled by Qt in the background:

// suppose list1 is a very large list of type QList<QString>
QList<QString> list2 = list1;
QString str1 = list2[100];

When list2[100] is read, it will actually be the exact same memory location as list1[100], since no write operation has been performed and the large list has not been copied.

A Great API and Good Documentation

The API of Qt is not unlike most of the well-known API frameworks. It is intuitive, robust and convenient. After a slight acquaintance, one can easily find their way around: things are usually named as one would expect them to be named and located where one would expect them to be located.



The API documentation is relevant and simple, not over-encumbered. Sometimes it does take a while to find what one is looking for, and sometimes a bit more information would come in handy. But, overall, it is thorough and extensive, with relevant examples, neatly organized and visually appealing, in line with, and occasionally even better than, the best documentation systems around, such as MSDN or the Java API docs.

Licensing: Open Source and Commercial

Qt provides several types of licenses, the major two being LGPL and Commercial. The LGPL license can be used for dynamic linking, i.e. simply using the features of Qt through the DLLs. The Commercial license is intended for those who change the Qt libraries themselves and do not want to make those changes public.

Reliable and Robust

Qt is not a young framework. It has been around for practically two decades, widely employed in both personal and industrial settings. It has been the basis of KDE, the second most popular Linux desktop environment, for more than 15 years, proving its stability through millions of users. It has been used by a wide variety of industrial players and areas, maturing to a level where it can be relied upon to be stable and robust. Qt also provides bindings for a number of languages beyond C++, such as Python, Java, C#, Ruby, and others. Indeed, Qt is so appreciated that the need arose to use it from other languages as well.

Out-of-the-box Tools

There is a set of tools provided along with Qt that supports the development process. These include the following:
• Qt Creator, a simple IDE that supports different build configurations and has strong support for debugging Qt-enabled code,
• Qt Designer, a graphical GUI editor that can be integrated into other IDEs as well,
• Qt Linguist, a manager for internationalized texts that can be easily handled within Qt, and
• Qt Assistant, a Help application that includes the Qt API documentation, but can also be integrated within your own applications to provide a Help system.

There are also some command line tools for compiling Qt projects, generating Qt-specific support classes or compiling UI files. Visual Studio integration is provided, with occasional limitations: excepting some extra steps that need to be done manually, it is mostly seamless, with debugging functional as well.

And Much More

Qt has grown to be a platform for both desktop and mobile development, in settings ranging from personal use to industrial or embedded software. As such, it accommodates many needs, all of which cannot possibly be contained within the scope of this article. Nevertheless, for the sake of completeness, we enumerate some further areas where Qt can prove to be an appropriate solution:
• Declarative UIs: the QtQuick framework and the QML language provide a declarative way of building dynamic user interfaces with fluid transitions and effects, aimed especially at mobile devices. Logic for the UIs described this way can be written either in JavaScript or in native C++/Qt code.
• Scripting: QtScript provides a scripting framework, with the possibility to expose parts of the application to the scripting environment, including the signals-slots mechanism.
• Type introspection and better dynamic casting: through QObject-based types, run-time type introspection is enabled, as well as quicker dynamic casting.
• A mechanism for dynamic properties is provided, where properties can be set and read from objects by a string-based name, without ever having been declared within their classes. An example is shown below:

QObject obj1;

// adds a new property to obj1
obj1.setProperty("new property", 2.5);

// d == 2.5
double d = obj1.property("new property").toDouble();

• Smart pointers and resource management are also provided, even beyond what C++11 has introduced in the meantime.
• Resource bundling is provided to supply resources, such as image files, in a binary package.

This Is Not a Sales Pitch

No doubt, there are downsides to Qt. When developing exclusively for the Windows platform, a .NET-based solution might come in handier for some. Qt has grown to be quite a large framework, so it might be a little hard to get into. Also, there are occasionally problems with database drivers and threading, and you will stumble upon various problems when using it extensively, as with any framework. But these are exceptions that highlight the otherwise great aspects. And of course, the advantages outweigh the disadvantages by a mile, and you can practically always find your way around problems with the help of some forums and by delving into the API docs.

Sadly, I have been away from Qt for quite a few months now. But at one point I needed to check something about the MVC pattern in general, and I thought that the description of Qt's models and views might shed some light on matters. And indeed, going back for a quick glimpse at items and views in the Qt API documentation was not only absolutely useful even for non-Qt-related things, but filled me with nostalgic joy and longing for those times when I delved into its wonderfully clean and intriguing world. So, without any other motivation than honest appreciation of the framework, I can heartily recommend it as probably the best C++ application framework out there. Asking around people who have worked with it, you will probably find perspectives quite similar to that of the author of this article. For, if you get a chance to explore and use Qt, you are most definitely in for an adventure that you will dearly look back on long after you have trod new ground.


programming

How to go beyond APIs


If you don't have APIs you don't exist. Well, to be fair, it would be more correct to say that without APIs you don't exist in the cloud, but then again, can you exist outside the cloud nowadays? You must be thinking: nice compilation of buzzwords to start an article. Thank you! I hope the rest of the article will be equally compelling or less repulsive, depending on your taste for buzzwords.

Alpar Torok

alpar-istvan.torok@hp.com Functional Architect @ HP România

Jokes aside, would you invest in a start-up that ignores the cloud and doesn't have APIs? I know I wouldn't. Software giants agree. I also have to admit, I didn't come up with the idea. I first heard it from Mac Devine, CTO of Cloud Services at IBM. He stood in front of an audience at the CloudOpen conference last year, talking about how to create a cloud-first company, and he was constantly making this point. If you don't have APIs, others cannot consume data from you and cannot interact with you; it's as if you don't exist.

We also hear a lot about the new style of IT and the power of developers. It's a great time to be a developer, really. All this decision power is shifting, and developers get to have much more say in the technology that gets used. It has gone so far that it's even infecting enterprise environments now. Some organizations have already taken this to extreme heights, allowing developers to make changes and push them to production many times a day, as they please. What does that really mean? It's actually really simple. You have to have good APIs. Really good APIs. The kind of APIs that others just can't resist using. Of course, it helps if your service is valuable as well, but no matter how valuable, accurate and reliable it is, if the API is hard to develop against, the developers with all the decision power will decide not to use it, or worse, will not even notice it.

Are you scared yet? You should be. That lengthy documentation you spend so much time writing and maintaining will not help. Developers are notoriously lazy beings, to the extent that they consider it their virtue. That's just another example of their powers. They will not read all your documentation; in fact, they are likely to read the first paragraph at most, or just the headings, and expect that they can try it out right away and understand it right away. And if they can't, they will move on, because they are brilliant, experienced, empowered developers, and if they can't understand your API in five minutes, you are doing something wrong. And you know what? They're right! Why shouldn't it be like that? After all, you are competing for their attention with other providers.

APIs are by no means new. They have been around since the early 2000s, and even before that, if you consider not only web APIs. Do you really think of anything other than REST when you think of APIs? Not a lot of people do any more. And the reason is that these were disruptive. Their simplicity won developers over. Everyone started using them, selecting them over more complex alternatives. Wouldn't it be nice if you could keep an edge and anticipate the "next REST"?

More than an API: noAPI

It's really catchy to innovate simply by prefixing a technology with "no". Is SQL a technology of the past? NoSQL to the rescue! It's still like SQL, but different. So, why not noAPI? Why not take APIs with their best features, but make them even easier to consume? Offer something that still makes it possible to manipulate and consume data and services, but is much easier for a developer to pick up. It's not an API, it's a noAPI.

Most of the time we no longer care about the specifics of the API; you don't implement everything. We have language bindings. That's an improvement, but is it enough? Everybody prefers to use them, they make sense, but at the end of the day language bindings are not much more than code reuse. Code reuse is great, but for ease of use we can do better.

The solution actually exists, and has existed for some time now; in fact, you might already know it and may have used it without thinking about it. Domain Specific Languages (DSLs) are a great fit for this. They are considered to be a form of API in the traditional sense. DSL is a term we use to describe a programming language designed with a specific problem in mind, as opposed to the general-purpose programming languages we are used to. Its aim is to closely resemble the domain, making it simpler to think about and solve problems in that domain. Regular expressions are a domain specific language. We use them because it's much easier than implementing the same functionality in some general-purpose language. The more complex the pattern matching that needs to be done, the more it pays off not to implement something custom. Because the language is close to the domain, it's easy to use.

Let's consider an example of offering a DSL instead of an API. Suppose we want to spin up a VM in a virtual environment. Just a bare VM, no templates, no OS install. The implementation for that could look like this:

#!java
// [...]
backingFile = new VMDiskBackingFile("/path/to/some/file/in/data/store")
backingFile.setThinProvisioned(true)

disk = new Disk(backingFile)
disk.setParavirtualization(true)
disk.setSizeGB(16)
VMService.createDisk(disk)

vm = new VirtualMachine()
vm.setRamGB(2)
vm.setDisk(disk)
VMService.createVM(vm)

vm = VMService.getVM(vm.getName())
VMService.startVM(vm)
// [...]

Some observations are in order. First, you might say that the provided API we are using could be designed better. It most certainly could be improved. The reality, however, is that it's common to see APIs like these, and it's not easy to improve them while keeping them modular, fast and generic. Second, there are a lot of things for the developer to remember. Create the file before the disk and the disk before the VM. Oh, and I forgot to mention: you need to get information about the VM back before you can power it on. I don't trust myself to remember all these, so what can we do? Consider this:

#!
vm {
    disk: 16.GB,
    backingFile "/path/to/some/file/in/data/store" {
        thinProvisioned
    },
    ram: 2.GB,
    power: on
}

Suppose you have multiple providers for your virtual infrastructure: the first one with the API in the first example, the second one with the DSL in the second. Which one would you prefer? Did you note the most important difference? With a DSL the developer doesn't have to focus on the "how", only on the "what". That is a great relief; it frees up the mind and allows it to focus much better. It's as if the tool is reading your mind. If you think it's a lot of effort and isn't worth it, stay tuned for the third and final part of the article. It's easier than you think.

Modern times, modern tools, quick implementations

Implementing a DSL might seem intimidating at first, so let’s briefly look at a simple way to implement it.


The JVM is home to many programming languages besides Java. Some argue that it's even more important than the language itself. Groovy is one of these languages. It's a dynamic scripting language with Java-compatible syntax, called so probably only because JavaScript was taken. It has a low learning curve: most Java code is Groovy-compatible, but not all Groovy code is Java-compatible. And that's where it gets weird. It can be so different that Java programmers will not even recognize it. In fact, it can be so exotic that it's even hard to recognize that it's Groovy. In all fairness, Groovy is a terrible programming language for most things. It will let you shoot yourself in one foot, and reload for the other. It is, however, really good for implementing DSLs. They say on their website that they support DSLs, but it's almost like the whole language was built for this sole purpose. The good thing is that it integrates seamlessly with Java, so you can always implement part of what you want in Java if you want to, and use Groovy only for the DSL.

A full treatment of the Groovy metaprogramming model is beyond the scope of this article. It does have a learning curve, but it's not excessively difficult once one gets the hang of it. The good part is that it's productive to work with; it does most of the work, so the learning curve pays off. An added bonus is that the DSL becomes a subset of Groovy. What this means is that you can always mix Groovy into the DSL should you want to. You get data and control structures in the DSL for free. There are some great projects out there that implement DSLs with Groovy. One of my favorites is Gradle, which implements a build DSL. It's fairly complex, but a very good example of an implementation.

At a high level, the DSL is used to create a model of the domain in the initial configuration phase, when the DSL is executed. The domain-specific work is executed only after the model is complete. This allows the execution to reason about what needs to be done while knowing everything that needs to be done. In the earlier example, this would mean that the service knows the complete layout and specifics of the VM before it starts to create it. There are a lot of other great and fast ways to implement DSLs. Most dynamic programming languages can be used to some extent. It's no longer necessary to start implementing a DSL by implementing a parser. So, next time an API is needed, will you consider noAPI?




programming

Keeping Your Linux Servers Up To Date Part I


In this article, which is split into multiple parts, I'm going to address the issue of keeping your Linux servers up to date. More often than not, when starting to administer the servers of a startup business, the systems administration team finds an already created datacenter built around a bunch of servers installed with the needed software: often a chaotically configured, "barely functioning" setup done by the first team of developers, doing DevOps jobs.

sorin.panca@yardi.com Senior Systems Administrator @ Yardi România



This situation gets in the way when one wants to keep the production or even the development systems updated, and it leads to long hours of work. Often, many sysadmins find systems with big uptimes that weren't updated for years. From the business point of view, as long as the systems are up and running, providing their services, it does not matter if they are up to date or not. Only when something bad happens (like a password database leak, a security compromise that leads to stolen valuable data, or the developers finding that their programs need new versions of software that are incompatible with the production operating systems' versions) does the management team rush the systems administrators to do upgrades a.s.a.p. This rush further causes poorly tested solutions to reach the production environment.

To simplify and streamline the update process without buying new hardware, we approached the problem in two steps. First, we virtualized the application layer by using Linux Containers (LxC). This type of virtualization enabled us to create "virtual machines" without losing performance or overloading the servers with a hardware simulation layer, as is the case with VMWare, RedHat's KVM, Oracle VirtualBox or XEN. The second step was to "virtualize" the hardware nodes by using a partition image file as the root filesystem. The upside of the "root filesystem in a file" approach is that we can replace the root filesystem with a new one just by rebooting the machine, and we can reverse the change quickly by doing another reboot. Also, with the root filesystem in a file, the operating system installation inside it can be upgraded and tested out-of-band and then published to all servers automatically. Then, when any of those servers needs to be upgraded, it only has to be rebooted. This type of approach is already common in the embedded world, where vendors publish "image files" to be uploaded to routers, out-of-band management boards (DRAC, iLO, AMT), etc.

In my setup, I used a source-based Linux distribution: Gentoo Linux. One may ask: "Why would anyone want to compile everything from sources when you can easily install binary packages?" When installing a server, a sysadmin often finds himself compiling software from source for various reasons: outdated packages, custom build features, patches to be applied, etc. Couple that with constant system updates, and he finds himself "really working to earn his money". Also, he often finds that not only one package needs to be compiled from sources, but many of its dependencies as well. On a binary distribution, one runs into multiple problems when compiling from source, like the well-known "dependency hell", or replacing system components (python, perl) with unsupported versions. So it is better to build everything from source using automation tools: Gentoo's package manager, portage (which resembles FreeBSD's ports).

THE SETUP

• a clustered storage system needs to run on all hardware nodes to provide fault-tolerant and distributed storage; we will not use OpenStack's solutions (as OpenStack is not supported yet on Gentoo) and we will use XtreemFS; each node will act as all three components of XtreemFS: directory server (DIR), metadata and replica catalog (MRC) and object storage device (OSD);
• the cluster storage will be accessed under the /warehouse directory on all hosts and interested guest systems;
• the root image file will be named "hn-root.img";
• the updated root image file will be named "hn-root-new.img";
• the old root image file will be named "hn-root-old.img";
• if a file called "system-revert" is present, the system will rename the current hn-root.img to hn-root-broken.img and hn-root-old.img to hn-root.img, and boot the old root image;
• all root filesystem image processing happens in the initrd boot phase;
• a custom init script that runs early in the boot process initializes the new raw installation found in the image file (which can be viewed as a "hardware node class" in an object-oriented way) to be instantiated with the specifics of each host, which is viewed as an "object" of the "hardware node class" (hostname, IP address, network configuration, ssh keys, storage cluster node identities, puppet ID, etc.); the script also takes care of synchronizing this configuration with a common state repository hosted on the clustered filesystem.

In this first part of the article, I'll talk about the host system, not the virtualized containers and virtual machines. When designing the host system root image, also known as a "hardware node" root image file, we found that it needs to satisfy some requirements:
• it only has to provide an environment to run LxC and KVM (and, at some point in the future, OpenStack, which is not currently supported outside Ubuntu and RedHat) and Docker; we chose to use KVM to run some Windows Server instances;
• the partitioning scheme should contain two partitions: an EFI partition to keep the boot loader (Grub v2, which is capable of accessing partition images directly after it creates a loop device) and a second one which will contain at least a partition image file from where the system can boot up;
• the EFI partition will be mounted under the /boot/loader directory;
• the host storage system will use GPT as a partitioning method instead of MBR, to allow the usage of a host storage system (RAID arrays) bigger than 2 TB;
• on the host partition (the one holding the root image file), all other data will be stored inside a "data" directory;
• the host partition will be mounted under the /hostpart directory;
• the partitions will need to be labeled and mounted by referring to their labels: the EFI partition will be labeled "EFI-BOOT" and the host and data partition will be labeled "hostpart"; /data0 will be a symlink pointing to /hostpart/data.

(Figure: the system's host storage device partitions and the root image file.)

(Figure: the initrd phase of the boot process.)

Genkernel is a script from the Gentoo project that helps Linux users compile their kernels; it should also work outside of the Gentoo distribution (if you want a binary distribution that is guaranteed to allow genkernel to work, take a look at a Gentoo derivative: Sabayon Linux or Calculate Linux). To be able to boot from the image partition, I modified genkernel's default/linuxrc file and added the logic described in the above diagram. You can clone the modified genkernel from github at https://github.com/psihozefir/genkernel.git. Also, the init script that cleans up the hn-root.img file of unneeded files and settings will be available soon. In the meantime, you can do the cleanup manually. This script is needed in order for you to be able to put the image file on multiple servers without causing confusion on your datacenter network and in management applications (like Nagios or Icinga, or the nodes' administration panels).

To install a system into a root partition image, the following steps should be followed (this procedure will wipe the storage drive clean, so back up everything before continuing):
1. Using parted, create a new GPT label if you didn't partition your storage using GPT yet; this step will destroy all the data stored on the storage device;
2. Create a very small partition to host the Grub2 boot loader (128 MB), format it as FAT32 (or FAT16 if the mkfs tool complains about the size of the allocation table) and label it EFI-BOOT (notice the case);
3. Create another partition that fills up the rest of the storage space and format it with any Linux FS (BTRFS is recommended);
4. On a separate machine (it can be a workstation), create a 15 GB partition (it can be bigger or smaller, as you see fit; a bigger partition will take longer to copy to all servers and a smaller one may fill up more quickly) and install a Linux distribution of your choice; once the installation is done, create an image of that partition using the dd command; note that the /boot directory is inside this partition, so the kernel and the initramfs will be accessed by grub after setting up a "grub2 loop device", which is different than the kernel's /dev/loop0 device;
5. Recompile the kernel using genkernel; if you only want to generate a compatible initramfs, you can do that using the command genkernel initramfs, instead of genkernel --menuconfig all;
6. In the /boot directory, the file "kernel" should be a symlink to the actual kernel file and the file "initramfs" should be a symlink to the actual initramfs file;
7. Loop mount the partition image file and, in /boot/loader/grub/grub.cfg, create a new menu entry (my partition image file is formatted as reiserfs, so I added the reiserfs module; you'll need to load the ext2 module if you have an ext2, ext3 or ext4 image file):

menuentry 'GNU/Linux in a file' {
    insmod part_gpt
    insmod fat
    insmod reiserfs
    insmod ext2
    insmod gzio
    set root='hd0,gpt2'
    loopback loop ($root)/hn-root.img
    echo "Loading Linux..."
    linux (loop)/boot/kernel root=/dev/ram0 real_root=/hn-root.img raw_loop_root_host_partition=LABEL=hostpart ro
    echo "Loading initial ramdisk..."
    initrd (loop)/initramfs
}

8. Edit /etc/fstab, remove all entries and add the following entries:

LABEL=EFI-BOOT  /boot/loader  vfat  noauto,noatime  1 2
LABEL=hostpart  /hostpart     auto  noatime         0 1

9. Unmount the image file and copy it to the destination server; install grub on its host storage drive: grub2-install --target=x86_64-efi --boot-directory=/boot/loader --efi-directory=/boot/loader; you'll need to chroot from a live USB Linux (System Rescue CD, for example) in order to be able to install grub, but that is beyond the scope of this article.

Another advantage of using image files is that it makes switching Linux distributions really easy. Just put in place an hn-root-new.img containing another Linux distribution and you're done!

In the next part of this article I'll talk about the XtreemFS cluster storage, and in the 3rd part I'll talk about the virtualization of systems (using LxC and KVM) and applications (using Docker). When OpenStack becomes available in one of our images, I'll also write a fourth part, where I will detail how we use OpenStack.

This setup is currently work in progress and some parts are not even developed yet. So, stay tuned!




programming

Data Modeling in Big Data


Silvia Răusanu

silvia.rausanu@isdc.eu Senior Developer @ ISDC

When someone says "data modeling", everyone automatically thinks of relational databases, of the process of normalizing the data, of the 3rd normal form, etc.; and that is a good practice. It also means that the semesters spent studying databases paid off and shaped your way of thinking and working with data. However, since college, things have changed: we do not hear so much about relational databases, although they are still used predominantly in applications. Nowadays, "big data" is the trend, and it is also a situation that more and more applications need to handle: the volume, the velocity, the variety and the complexity of the data (according to Gartner's definition). In this article I am going to approach the dualism of the concepts of normalization and denormalization in the big data context, drawing on my experience with MarkLogic (a platform for big data applications).

About normalization

Data normalization is part of the process of data modeling for creating an application. Most of the time, normalization is a good practice, for at least two reasons: it frees your data of integrity issues on alteration tasks (inserts, updates, deletes), and it avoids bias towards any query model. In the article "Denormalizing Your Way to Speed and Profit"1, there is a very interesting comparison between data modeling and philosophy: Descartes's principle of mind and body separation, widely accepted initially, looks an awful lot like the normalization process (separation of data); Descartes's error was to separate (philosophically) two parts which were always together. In the same way, after normalization, the data needs to be brought back together for the sake of the application; the data, which was initially together, has been fragmented and now needs to be recoupled once again. It seems rather redundant, but it has been the most used approach of the last decades, especially when working with relational databases.

1 Todd Hoff, Scaling Secret #2: Denormalizing Your Way to Speed and Profit, http://highscalability.com/scaling-secret-2-denormalizing-your-way-speed-and-profit

Moreover, even the lexicon and the etymology support this practice: the fragmented data is the data considered "normal".

About denormalization

When it comes to data modeling in the big data context (especially MarkLogic), there is no universally recognized form in which you must fit the data; on the contrary, the schema concept is no longer applied. However, the support offered by big data platforms for unstructured data must not be confused with a lack of need for data modeling. The raw data must be analyzed from a different point of view in this context, more precisely, from the point of view of the application needs, making the database application-oriented. Judging by the most frequent operation, read, it may be said that any application is a search application; this is why the modeling process needs to consider the entities which are logically handled (upon which the search is made), such as: articles, user information, car specifications, etc. While normalization breaks the raw data apart for protocol's sake, without considering the functional needs, denormalization is done only to serve the application; of course, with care, as excessive denormalization can cause more damage than it solves. The order of the steps in which an application using normalized data is developed seems to follow the waterfall methodology: once the model is established, work starts on the query models, and no matter the obtained performance, adjustments are made to the query or to the database indexes, but never to the model. With a denormalized database, the relationship between the data model and the query models better describes the agile




methodology: if the functional and nonfunctional requirements are not covered, then changes are made to the data as well, in order to improve query performance, until the required result is obtained. All the arguments which made normalization so famous still stand, but big data platforms have developed tools to keep the data integrity and to overcome other problems. Big data systems are easier to scale for high volumes of data (both horizontally and vertically), which makes the problem of the excessive volume generated by denormalization simply go away; moreover, the extra volume helps improve the overall performance of the searches. The solution to the integrity problem depends on the chosen architecture, but also on the master of the data.

methodology: if the functional and nonfunctional requirements are not covered, then changes are made to affect the data as well in order to improve the query performance, until the required result is obtained. All the arguments which made the normalization so famous, still stand, but big data platforms have developed tools to keep the data integrity and to overcome other problems. The systems for big data are easier to scale for high volumes of data (both horizontally and vertically), which makes the problem of excessive volume generated by denormalization to simply go away; moreover, the extra volume helps improving the overall performance of the searches. The solution to the integrity problem depends on the chosen architecture, but also on the master of the data.

Solving integrity issues on denormalization

When data denormalization is chosen, it is clear that the chosen solution is an application-oriented data center, but this represents only the data source with which the application directly communicates, not the original source of the data (or the master of data). For the big data systems, there are two options: either they live only in the big data database, either the data has as original source a relational database and using an extract-transform-load (ETL) tool, the data arrives in the big data “warehouse”. Having this two options, the possible integrity issues are handled correspondingly. If the data exists only in the big data system, it is required an instrument for

32

synchronization and integration of the data which was altered. The tools to implement map-reduce and the most often used as they proved to be efficient and they run on commodity hardware. Such sync processes can be triggered as soon as the original changes was applied – when the changes are not too often and there is no possibility of generating a dead-lock; when the changes are more often, it is recommended to use a job running on an established time table. When the original data are located in a relational database, the effort of maintaining the integrity of data is sustained by the original storage system – which is expected to be normalized. In such situation, you need to invest a lot in the ETL tool to restore the logical structure of the data. Even if the freedom offered by this tool is large, applications needto respect a certain standard of performance and reliability, thus the new changes must reach as soon as possible the big data system; therefore, the risk of excessive denormalization exists, greatly reducing the computational effort on the big data platform.

identifier of the newspaper and the one in the entitlement and the entitlement period to encapsulate the date of the article. Why denormalization is unsuitable for this scenario? The model for the column needs to contain denormalized information about all the users who are allowed to access it – this would represent a pollution of the column entity, but also extra computational effort on the ETL or map-reduce side and this would result in a degradation of the value of the application; moreover, changes occurring on an entitlement period for a certain user can alter millions of columns and this would trigger a process of reconstructing the consistency for the entitlements…eventually.

Conclusion

In the big data context, the best option for data modeling is denormalization – modern applications need high responsiveness and it does not worth to waste (execution) time to put back together the normalized data in order to offer to the user the logic entities. Of course, complete denormalization is not the best option for encapsulating a big many-to-many, as I Denormalization and joins have shown in the previous paragraph. To Having all this evangelization from finish in a funny note, according to the title the above for denormalization, it seems of the article2: “normalization is for sissies”, senseless to touch the subject of “joins”; and denormalization is the solution. denormalization is a solution to avoid large 2 Pat Helland, Normalization is for Sissies, http://blogs. scale joins – we are in a big data context msdn.com/b/pathelland/archive/2007/07/23/normalization-is-forsissies.aspx after all. However, quality attributes, multiple data sources and external protocol compliance can radically reduce the options for modeling/denormalizing. Let’s take a concrete example, the business model for periodic entitlements for the columns in a newspaper; let’s also add the dimension of the model to handle: 45 million of articles, and 9 billion of column-user relations. Each user can purchase entitlements to certain newspapers on a time basis (only a few editions); therefore the join conditions are derived from the match between the

no. 23/May, 2014 | www.todaysoftmag.com


management

Code Kata


The word Kata comes from martial arts: it is the Japanese translation of the word "form". Kata is used to describe detailed, choreographed patterns of movement that can be practiced either alone or in pairs.

Tudor Trișcă

Tudor.Trisca@msg-systems.com Team Lead & Scrum Master @ msg systems România

It can also describe other martial arts activities, such as training and highly detailed simulated combats. Katas were learning and teaching methods through which successful combat techniques were preserved and passed on. Practicing Kata allowed a group of persons to engage in combat using a systematic approach, rather than as individuals in a chaotic manner. The main goal of using Katas in martial arts is transmitting proven techniques and practicing them in a repetitive way. This helps the learner develop the ability to execute these techniques and movements in a natural way, just like a reflex. To reach this level, the idea is not to repeat mechanically, but to internalize the movements and techniques, so that the person can adapt them to different needs.

Practicing

Practicing as a learning method is ubiquitous. There are many areas in which it can be applied, not only martial arts:
• Playing a musical instrument
• Improving one's performance in sports
• Preparing for public speaking, and others.

It has already proven itself as a fundamental learning method. Of course, it depends on many factors, but with the right guidance and dedication, a person can master a lot of things by practice.

Procedural Memory

Procedural memory is a type of long-term memory; more specifically, it is the type of memory used for learning techniques. It is built by repetition of the same complex task, resulting in an enhancement of the neural systems used to accomplish that task. Psychologists started to write about procedural memory over a century ago. After a lot of research, it was proven that merely repeating a task alone does not ensure the acquisition of a skill. Behavior must change as a result of the repetition. If the behavior change is observed, then one can say that the new skill was acquired.

Code Kata

Dave Thomas was the first to introduce the idea of Katas as a learning technique in programming. The approach is really simple: a code kata is a small coding problem, intended to be easily solved, that you solve again and again, repeated to perfection. The idea is to help the programmer solve the given problem in a better way with every attempt, while the subconscious learns detailed problem/solution pairs that might help with other problems. Katas can also be done by adding other challenges or limitations, such as the use of a programming language other than the one used daily. A Kata can be done either by one programmer or in an organized group.

Work is not practice

Work can bring too much pressure on inexperienced programmers: the pressure of having to deliver quality code using unfamiliar techniques can lead to frustration, mistakes or failure to apply best practices. There is not always enough support from mentors: the experienced programmers are often busy finishing their assigned tasks and do not find the time needed to help others grow.

Becoming a better programmer is about practice. How good would a music band be if they only practiced while on stage? What kind of quality would a play have if the actors were only given the script an hour before the show?




How not to Code Kata

A lot of people think that Code Katas are about solving the same problems in the same way, over and over. That will probably lead only to learning new shortcuts in the IDE you use. As I previously mentioned about procedural memory: merely repeating a task alone does not ensure the acquisition of a skill; behavior must change as a result of the repetition. Just as walking each day does not make you a master walker, and driving a car every day does not make you a superior driver, solving the same sets of programming problems over and over again will not make you a master programmer. Repeating the same thing again and again without an increasing level of challenge actually results in the mind becoming complacent with the activity. If you want to do Code Katas, they must be challenging. Repetition will not help if the brain is not engaged. The brain must be challenged in order to exercise and create new neural pathways.
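As a tiny illustration (my example, not from the article) of how behavior can change between repetitions of the same kata, consider solving the same problem, summing the even numbers in a list, first imperatively and then again with Java 8 streams:

import java.util.Arrays;
import java.util.List;

public class SumEvensKata {
    // First attempt: the familiar imperative loop.
    static int sumEvensLoop(List<Integer> numbers) {
        int sum = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                sum += n;
            }
        }
        return sum;
    }

    // A later repetition: the same problem, deliberately solved with
    // a different technique (streams) to keep the brain engaged.
    static int sumEvensStream(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0)
                .mapToInt(Integer::intValue)
                .sum();
    }

    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);
        System.out.println(sumEvensLoop(numbers));   // 12
        System.out.println(sumEvensStream(numbers)); // 12
    }
}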

Tips & Tricks

Pair-programming: teamwork is a key factor in Code Kata sessions. It should enable learning to occur during implementation, because the programmer not at the keyboard is observing, commenting and questioning, providing continuous feedback (but not dictating, criticizing or controlling).

Completion must not be enforced: if there is too much emphasis on completing the task, programmers will start to sacrifice quality. This happens in production environments when too much pressure is placed on velocity over quality. If the aim of the session is to speed programmers up, they must be allowed to practice without feeling the pressure of completing the task within a set timeframe. In time, it can be observed that the programming techniques become second nature, and the end result will be faster development and higher quality code.

Getting used to deleting the code at the end of the session: the idea to start with is that there are always multiple ways of solving the same problem, and that is especially true for programming. If you ask ten programmers to solve a certain problem, most likely there will be ten different implementations. But which one is the best? Probably none of them, potentially a combination of some (or all) implementations, or maybe a totally different solution. The point is that it is good for programmers to understand there is more than one way of doing something, and that throwing away the code and starting again is sometimes the best option.

Helping pairs to contribute equally: the programming pairs that participate in the session must be observed and helped to treat each other as equals. It is often difficult when a pair consists of a senior and a junior, but it is important that both contribute equally. This is not a mentoring session.

Strictness of time: from time to time, the tendency will be to ask for a bit more development time. But this is because too much emphasis is being placed on task completion, which must not be a priority. The programmers must be reassured that it is


not about completion; once they understand this, they will stop asking for more time.

Keeping the session fun, but intense: background music never hurts (but it should not be a distraction); discussions must be initiated, rewards provided, whatever is needed to stimulate the programmers. The focus must be on removing anxieties and reassuring participants that making mistakes and failing is good.

Encouraging communication: the room where the session is held should always be buzzing with discussions and ideas, while staying highly focused on the task. If the room is quiet and people are disengaging from the process, then something is wrong.

Finding challenging Code Katas, but not out of reach: do not make the mistake of choosing a highly complex task for the session, because enthusiasm will fall and the programmers will disengage from the process.

Conclusion

The idea to start with, when you want to initiate or participate in a Code Kata session, is that the point of a Kata is not arriving at a correct answer. All that matters is the knowledge you gain along the way. The goal is the practice, not the solution.

References:
"Code Katas", Bart Bakker, 2014
"Code Kata", Joao M. D. Moura, 2014
"Using Code Kata practice sessions to become a better developer", Kirsten UVd, 2013
"Performing Code Katas", Micah Martin, Kelly Steensma, 2013
"Why I Don't Do Code Katas", John Sonmez, 2013
"Code Katas: Practicing Your Craft", 2011
"Code Kata – Improve your skills through deliberate practice", 2013


management



Effective use-case analysis

Requirement elicitation is commonly mentioned among business analysts, but where do these requirements originally come from? It is quite complicated to understand and start collecting the required functionality, as the product stakeholders' explanations are often ambiguous, diverse and lacking technical terminology. But how do you manage to identify and write down all the functionalities required by the stakeholders, the user roles and their actions? The answer is pretty simple: you have to perform a use case analysis in order to capture the contracts between the system and its users, and then refine the system's use cases to obtain an organized and more complete view. Writing effective use cases is one of the most important responsibilities of a business analyst, as it lends the elicited functional requirements enhanced objectivity and reliability, allowing for a proper representation of the functional and behavioral requirements of a software product. As opposed to a textual-only representation, use case modelling allows for a stepwise description of the independent or interconnected sets of actions performed by the designated actors. Once written, use cases become deliverables of wide use, being beneficial for several participants in the project delivery lifecycle, namely: stakeholders, BAs, development and testing teams, technical writers, UI/UX designers, software architects and even the product's management teams.

Who & What

In order to have a solid starting point in collecting the correct requirements of a system, a BA should first ask who will use it, who will gain business value by interacting with that system, or who will achieve a business goal by means of that interaction. The answer is: the ACTORS, namely persons or software components that make use of the system under discussion. It is recommended to give a name and a short description to each actor, in order to avoid confusion, role overlapping and/or ambiguity. The actors are outside of the system and they can be of several types, for example the main actors of a use case, the supporting actors of use cases, or sub-systems or components of the system under design. After shedding light on all the actors, the BA must identify the activities or sets of activities performed by each of them. These are called use cases, and they represent the way in which the actors interact with the system in order to obtain business value and achieve their goals. These actions must be independent, having names and short descriptions as well. The BA must also be aware that among these activities some are happy day scenarios, some are alternatives, and there can also be error scenarios; only a deep understanding of the business will enable the BA to accurately determine and classify them. Also, in order to obtain accurate use cases, attention has to be paid to the pre and post conditions of each use case. So, the use cases are the core of a software product, as they describe its behavior and way of use from the viewpoint of each actor. Use cases are a very powerful requirements modelling technique. The complete set of actors and use cases is known as the system's use case model. This model can be represented as a written document or as a diagram, commonly both, since each provides a different level of detail of the use case, with actors represented as people, use cases as ellipses, and their relationships by arrows oriented from the actor towards the corresponding use case.
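Purely as an illustration, here is a hypothetical Java sketch of ours (the names are invented and this is in no way a standard BA deliverable) showing how little structure a use case model actually needs:

import java.util.Arrays;
import java.util.List;

class Actor {
    final String name;        // e.g. "Customer"
    final String description; // a short description helps avoid role overlapping

    Actor(String name, String description) {
        this.name = name;
        this.description = description;
    }
}

class UseCase {
    final String name;        // an independent action, e.g. "Place order"
    final List<Actor> actors; // who interacts with the system to achieve it

    UseCase(String name, Actor... actors) {
        this.name = name;
        this.actors = Arrays.asList(actors);
    }
}

public class UseCaseModel {
    public static void main(String[] args) {
        Actor customer = new Actor("Customer", "A person who buys from the shop");
        Actor clerk = new Actor("Clerk", "An employee who handles orders");

        // The complete set of actors and use cases is the system's use case model.
        List<UseCase> model = Arrays.asList(
                new UseCase("Place order", customer),
                new UseCase("Cancel order", customer),
                new UseCase("Ship order", clerk));

        for (UseCase useCase : model) {
            System.out.println(useCase.actors.get(0).name + " -> " + useCase.name);
        }
    }
}

The written document and the diagram remain the real deliverables; a sketch like this only makes the model's structure explicit.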

Case study

Let's say we have a customer whose business is hosting events, such as conferences, in their building. The customer's building has 3 conference rooms, able to accommodate 100, 150 and 200 people, respectively. The customer provides a complete service for their customers that includes:
• Renting one or more conference rooms for their event
• Promoting their customer's event
• Maintaining a list of reservations
• Managing reservation confirmation / cancelation
• Managing non-cancelled nonattendances
• Making sure that there are no reservations beyond the total capacity of the rented rooms

At the moment, our customer maintains all this data using spreadsheets and wants us to build a web-based IT solution that helps them manage and administer their services. So, how should a BA proceed in understanding the actual requirements for the IT solution? A little 'domain knowledge' in hosting events would help, since it would highlight the fact that, usually, rooms can be split by means of fake walls and thus the number of rooms is actually variable, depending on context. For simplicity, let's assume that this time it is not the case. Also, a first look at the customer's description would highlight the fact that the rooms are actually not the same size, so perhaps they might have different pricings. Furthermore, 'domain knowledge' helps us again in that it makes us ask about overbooking policies. In any case, we should never make assumptions, but rather ask the customer about all of these topics as soon as we think about them! So, in our simple scenario, we've asked the customer all these questions and have concluded that the room number is fixed (there is no possibility to dynamically split the rooms), there are no overbooking policies and, indeed, rooms have different prices that the customer wants to be able to change, and voila, a hidden requirement: management of the room prices! Now, to the actors! Who are they? Some of them are obvious: the people reserving,



confirming and cancelling reservations, and our customer's customers that need to be able to rent rooms. But is this so? Let's ask our customer and see! Upon asking the customer, what do you know? It appears that it actually isn't! Customers of our customer never actually use the system when booking rooms, since our customer does not have the means for e-payment, so it is someone from their staff that actually assigns one or many rooms to one of their customers. So, the actual actors are the people attending (via reservations) the event and our customer's staff. It is very important to name these actors using domain specific names! We wouldn't want to call the person attending an event the Chef; that might be an appropriate name for a restaurant application rather than our conference room booking application. So, asking the customer, as always, we find out that they actually refer to these people as Attendees, so one actor would be the Attendee. Alright, so what should we call the staff member that assigns rooms to their customer? Our customer says that they call these staff members precisely that, staff members, so one actor would be Staff Member. Putting it all together, let's try a simple, high-level use case diagram to describe these facts:

Fig 1 – First attempt at a use case diagram

So, how does this look?

Well, the first thing we spot is that we're missing parity! We have an 'Assign rooms to customer' use case, but it appears that we're missing the 'Un-assign rooms from customer' use case. But, what a surprise, when we ask our customer about this, they say they don't have a clue what we're talking about with un-assigning rooms! This is a breakthrough for us, since it means that we are about to deepen our understanding of the domain! After asking the customer how they actually manage room assignment for their paying customers, they say that they actually don't do it like we assumed! What they do is create an Event, with a customer, a name and a description, and they actually assign rooms to that Event. They never un-assign anything; they simply go and delete the Event from their spreadsheets once the Event took place! That is interesting, and we immediately guessed that the Event must also have a date and that it wouldn't make sense to allow people to make reservations to an Event that has already happened!

Fig 2 – A refined use case diagram



Our customer immediately confirmed that it is so! Another try at the use case diagram renders the result from Fig 2. We are still not happy with our previous parity problem. We are experienced business analysts and we know one thing: users make mistakes! So, whilst using spreadsheets, our customer wasn't aware that there is indeed a hidden "Un-assign rooms from event" use case that takes place by simply overwriting or deleting a row in the spreadsheet. However, since we're building a dedicated IT solution, the use case must exist explicitly if users are to be able to correct erroneous room assignments. So, we insist and the customer accepts. Therefore:

Fig 3 – A more complete use case diagram

Of course, we didn't want to go into it until now, since it is a generic and thus not domain specific topic, but there is the aspect of defining the staff members that the system will authorize! It is a rather generic topic, but there usually tends to be an Administrator who is able to assign roles to people. So, while it is simple enough to plot the Administrator actor in there, with the power of assigning the Staff Member and Administrator roles to people, the question emerges: how are Attendees created? This, like many others so far, is a question that only the customer can answer (but they won't do it by themselves, it is our BA job to ask them!)!

Fig 4 – Enter the Administrator

Surely enough, the customer answers and tells us that everybody should be allowed to attend! So, the Administrator would not be in charge of granting the Attendee role; anyone has it by default! Looking at the use case diagram, we're trying to run scenarios based on the initial customer provided description to see whether we've covered everything. Something is missing: nonattendances! Our customer wanted to know if someone has made a reservation but hasn't actually showed up, without cancelling the reservation. If only there was a way of distinguishing the people who actually do show up from those that don't! Hmm, and this is when we realize that the 'Confirm reservation' use case makes no sense. What does it actually mean to confirm a reservation? Is it confirmed by the system, in that your seat is booked for you and you won't lose it? Is it confirmed that you actually showed up? The customer clarifies that the intent is to specify that somebody actually shows up. Well, in that case it should be entitled 'Confirm attendance' rather than 'Confirm reservation', and it should belong to the Staff Member, not the Attendee. Confirming a reservation makes no sense, since reserving a seat will either fail, because no seats are available, or succeed, because there are no overbooking policies that might cancel a reservation!

Fig 5 – A more realistic use case diagram

So, can we now support all the requirements that the customer originally issued?
• Attendees can make and cancel reservations.
• Staff members can confirm attendances, thus distinguishing who actually attended.
• Staff members can create events and can assign and unassign rooms to and from events.
• Staff members can delete events.
• Administrators can grant and revoke roles (Staff Member and Administrator).

What's still missing is the ability for someone, presumably the staff member, to see a nonattendance report showing the people that did not attend whilst not having canceled their reservations. We ask the customer about this and they confirm! So we need a use case for that as well! Also, what about deleting an event? Do we actually delete an event, or do we just mark it as having taken place so that no reservations can be made for it? It matters, because not deleting an event



allows us to mark the completion of the event whilst still providing us the chance to run reports (such as the nonattendance report the customer just confirmed). As always, upon asking the customer, they confirmed that they want the event to remain in the history of the system and that the action should be called 'Close' the event. While closed, no reservation can be made for that event. Also, once closed, the event can never be opened again, since the action should only be done after the event actually took place! Considering that, we ask whether closing an event shouldn't actually be an automatic action employed by the system at the proper date (remember that we previously discovered that, apart from customer, name and description, an event also has a date). The customer happily approves our idea! Therefore:

Fig 6 – The complete use case diagram

Finally, it appears that we now have the complete use case diagram that supports all the customer requested scenarios, and it enhances them with automations and administrative capabilities! Let's not forget our hidden requirement: managing room prices! Asking the client, they say that the room prices rarely change and, for the moment, they don't want this use case implemented! We run the diagram by our customer and they're very happy!

Use case advantages
• Use cases are real requirements.
• Use cases provide a very good coverage of the actors involved and their specific needs from the product.
• The main flows described by means of the use cases provide the BAs with a general acceptance from the stakeholders that those are the expected functionalities.
• Use cases are of great help in project planning activities, when talking about release dates and priorities.
• Use cases represent a solid basis for early product cost and complexity estimations and negotiation.
• Use cases represent a means of communication between the product's stakeholders, as they present the actors and the business critical flows.
• Use cases support drilling through the happy day scenarios in order to discover alternative situations and limitations.
• Use cases represent a good starting point for the product's test design and end-to-end testing.

Use case limitations
• Use cases don't represent the complete set of requirements of the software system under discussion.
• There are projects where it is considered more complicated and time consuming to write use cases than just user stories.
• As there is no standard method for use case development, each BA must develop them according to his own vision of the requirements.
• When dealing with projects having complex flows, the corresponding use cases may be difficult to understand for the end users and/or stakeholders.
• The <<extends>> and <<includes>> relationships used in a UML use case diagram are complicated to understand and interpret and may create confusion, leading to a lack of proper use.

Conclusions

In conclusion, use case analysis is extremely important in and of itself, regardless of the deliverables, which are of course extremely valuable. Why the deliverables are valuable is obvious, since they represent the core of the system's functionality and behavior and are used by a very diverse set of people, from business analysts, architects and users and so forth, for everything from describing the system to validating the architecture using use case based scenarios. But why is the mere act of performing a use case analysis important? Because it allows us to methodically decompose the high-level requirements into an ever more detailed and profound domain understanding that helps us uncover the real requirements! Only by performing a thorough use case analysis can we hope to uncover the underpinning domain model and develop an IT system truly aligned to the actual business requirements.

Anita Păcurariu
anita.pacurariu@endava.com
Business analyst @ Endava




Skills over Technology

The natural career of a software developer is: junior programmer, senior programmer, technical lead / team leader, optionally architect, and then it turns into management. There's something paradoxical about this path: the career that started with writing code ends in not writing code at all anymore. After all, how can you keep up with all the new and shiny stuff that appears in technology each year? A new type of career has surfaced in recent years, one that's much more interesting. We will look in this article at people who don't fit the traditional profile, at those developers who still write code and can help others even if they are 40, 50 or 60 years old. Robert C. Martin. Michael Feathers. Rebecca Wirfs-Brock. How are they different? How can they keep up with changes?

Technology Fundamentals Don't Change That Much

When was unit testing invented? Test Driven Development? Use of abstractions for changeable software design? SOLID principles? They all sound like shiny new things. After all, the core books were published in the past 10 years or so. But are they really new?

Barbara Liskov had a keynote at QCon 2013 entitled "The Power of Abstraction"1. She talks about the initial conversations about changeable design that the small community of developers had back in the 1970s. Around the same time, she gave the name of one of the SOLID principles, "The Liskov Substitution Principle", after a casual remark made on a bulletin board. That's 40 years ago! 40 years since one of the core OOP design principles was invented, a principle that millions of programmers use every single day. 40 years since abstractions were introduced to allow changeable design.
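To see what this 40-year-old principle looks like in code, here is the classic textbook illustration, a minimal sketch of our own (not an example taken from Liskov's talk):

class Rectangle {
    protected int width;
    protected int height;

    void setWidth(int width)   { this.width = width; }
    void setHeight(int height) { this.height = height; }
    int area() { return width * height; }
}

// A Square "is-a" Rectangle mathematically, but as a subtype it
// silently changes the contract of the setters it inherits.
class Square extends Rectangle {
    @Override
    void setWidth(int width)   { this.width = width; this.height = width; }
    @Override
    void setHeight(int height) { this.width = height; this.height = height; }
}

public class LspDemo {
    // Client code written against Rectangle's contract:
    // setting the width must not affect the height.
    static int resize(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();
    }

    public static void main(String[] args) {
        System.out.println(resize(new Rectangle())); // 20, as the contract promises
        System.out.println(resize(new Square()));    // 16: substituting the subtype broke the client
    }
}

The principle simply asks that a subtype be usable wherever its base type is expected, without surprising the caller; Square compiles fine, but it breaks the promise Rectangle made.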
But maybe other things are revolutionary. Web services are a new idea, aren't they? How about REST architecture?

It's true that a few things had to happen for web services to appear. First, the business need. Second, the standardization. Third, the expansion of the web. However, if you look at web services, they are based on the following idea: compose complex functionality out of small components that do one thing well and communicate through text messages. Strangely enough, this idea is in the UNIX design principles2 that were defined in the 1970s:

"Rule of Modularity: Write simple parts connected by clean interfaces.
Rule of Composition: Design programs to be connected to other programs.
Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do."

How about REST services? Exposing resources using the limited operations provided by the http protocol? This surely had to appear later on. Well, let's take a look at what Alan Kay, one of the founders of Object Oriented Programming, had to say about this paradigm:

"I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning -- it took a while to see how to do messaging in a programming language efficiently enough to be useful)"3

"every object should have a URL"4

The second quote is from a talk he did in 1993. That's REST services, in one short sentence, 20 years ago.

If technology fundamentals don't change that much, then what does?

Implementation Gets Easier

20 years ago, if a programmer had to make two objects communicate over a network, a lot of wizardry was involved. A now (hopefully) forgotten technology called CORBA was the standard way to do it. In order to make it work, you had to understand it, write code in a specific way, figure out the problems etc. It took a lot of man days to fix and make it work, unless you were one of those people who could visualize the bytes moving between processes.

Today, the standard way is to write a web service with a defined interface, something that any programmer can do in a few hours (we're assuming simple functionality here). The programmer has no idea how the communication happens (unless interested in the subject), just that it works when the code is written in a certain way. Debugging can still take some time, but it's easier with specialized tools.

15 years ago, writing a small program required knowledge of the way memory is allocated, something that generated many man days of seeking the source of an error with the "helpful" message: "memory corruption error: #FFFFFF". Today, most developers have forgotten about pointers and dynamic memory allocation because the programming language and platform take care of it.

The difference is not in the fundamentals. It's in the implementation. And implementation gets easier and easier as time goes by. But if implementation is easier, why do we keep having problems in software development?

1 http://www.infoq.com/presentations/programming-abstraction-liskov
2 http://www.catb.org/esr/writings/taoup/html/ch01s06.html
3 http://www.purl.org/stefan_ram/pub/doc_kay_oop_en
4 http://www.catb.org/esr/writings/taoup/html/ch01s06.html

www.todaysoftmag.com | no. 23/May, 2014

39



We Face The Real World

Our own definition of architecture is “when programming meets the real world”. In programming, everything is clean, repeatable, reliable. The computer doesn’t give two different answers to the same question, unless programmed to do so. The real world is different. Not everyone uses the same date format, calendar or alphabet. Time can change depending on timezone or the relativistic speed. Servers fail. Networks go down. The fundamental difficulty of programming has always been to translate ambiguous requirements to very precise code that is resilient to the lack of dependability of the real world. Yet, for many years, this fundamental difficulty has been hidden beneath implementation issues. Programmers had enough challenges related to memory allocation and networking communication that they couldn’t face the real world. Therefore, many of them were shielded from it. Once the implementation was simplified, the fundamental difficulty of programming became visible. We talk less about communication between services and more about changeable design, because change is part of the real world. We talk less about memory allocation and more about unit testing, because everyone makes mistakes. And this brings us to the conclusion:
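To make the timezone example tangible, here is a small sketch of ours using the java.time API introduced in Java 8; its output depends, fittingly, on when and where you run it:

import java.time.ZonedDateTime;
import java.time.ZoneId;

public class RealWorldTime {
    public static void main(String[] args) {
        // The same instant, seen through two different real-world clocks.
        ZonedDateTime cluj = ZonedDateTime.now(ZoneId.of("Europe/Bucharest"));
        ZonedDateTime newYork = cluj.withZoneSameInstant(ZoneId.of("America/New_York"));

        System.out.println(cluj);     // e.g. 2014-05-05T18:00+03:00[Europe/Bucharest]
        System.out.println(newYork);  // same moment, different wall-clock time (and maybe date)
    }
}

Code that assumes a single clock, calendar or date format is code that has not yet met the real world.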

Skills Over Technology

Technologies change. Yet the fundamentals of programming haven't changed in the past 20 years or so. They probably won't change dramatically in the next 10 years. If you want to be in touch with programming for many years from now on, like Robert C. Martin, Michael Feathers or Rebecca Wirfs-Brock, here's what you need to do: master the skills and the fundamentals of programming, not only the technology you work in today. This doesn't mean you shouldn't know the platform you're using, just that you should understand that it's merely a tool that helps you do what you do: turn ambiguous requirements into changeable, working, precise source code, ready to face the real world.

Technology Independent Skills

We mentioned in the article a few technology independent skills. Here's a list that doesn't claim to be complete, but we think it is a good start:

Programming language features:
• Object Oriented Paradigm
• Functional Paradigm
• Dynamic languages
• Strong typing and weak typing

Software design:
• SOLID principles
• The four elements of simple design
• UNIX Design principles
• OOP Design patterns
• Integration patterns
• (lots more)

Parallel programming:
• Shared resources
• Synchronization mechanisms
• Ways to avoid deadlocks
• Concurrency patterns

Validation:
• Automated testing: The pyramid of tests. Structuring tests for large applications.
• Unit testing: Principles. Stubs and Mocks.
• Design by Contract
• Code review
• Pair programming (yes, it's a skill)

Architecture:
• Managing risks
• Communicating technical constraints to non-technical people
• Communication with developers
• Styles of architecture: SOA, REST, etc.
• (many others)

Refactoring:
• Identifying code smells
• Manual refactoring
• Refactoring using automated tools
• Write your own refactoring scripts (eg in vim)

Dealing with code you're afraid to change:
• Writing characterization tests on existing code
• Changing the least amount of code when fixing a bug or adding a new feature
• Understanding code fast, without reading it much

These skills are independent of technology. Once you master them, you can use them in a completely new technology. This will allow your career to grow, no matter how the technology changes.

Conclusion

The fundamental difficulty of programming has always been to translate ambiguous requirements to very precise code that is resilient to the lack of dependability of the real world. The answer to this challenge has been developed in the past 50 years and hasn't changed that much. The only thing that changes is the implementation, usually by getting easier.

Mastering the fundamentals of programming and the associated skills is your best bet for a strong career, independent of future technology changes. The "Skills over technology" mantra doesn't mean you shouldn't know the technology, just that the skills are more important in the long run.

If you want to go on this path, you are not alone. Software craftsmanship communities in your city can help (for example the Agile Works community in Romania – http://agileworks.ro) and craftsmanship conferences such as I TAKE Unconference – http://itakeunconf.com are organized with this purpose in mind. Join them and grow your career as a software craftsman, not as a future manager.

no. 23/May, 2014 | www.todaysoftmag.com

Alexandru Bolboaca

alex.bolboaca@mozaicworks.com Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Adrian Bolboaca

adrian.bolboaca@mozaicworks.com Programmer. Organizational and Technical Trainer and Coach @Mozaic Works



Chasing The Perfect Scrum


Bogdan Mureșan

bogdan.muresan@3pillarglobal.com Director of Engineering @ 3Pillar Global

Or the moment when trend beats logic. A while ago, a colleague of mine wrote a very interesting article about best practices in Agile methodologies. While, looking at the title, you would expect a set of rules which would allow you to be the best Agile person on the planet, we pleasantly discover that in real life these rules are actually guidelines and adaptations to thousands of different situations. This article made me a little more aware of the following situation: how many times did you actually hear somebody saying: "On my project we implement Scrum 100%"? It's like somebody saying "I have a perfect life". It happens only from time to time, and not on Earth. I have been working based on Scrum principles for almost 8 years now. I was involved in a few projects and also had the pleasure to discuss with a lot of people outside of my current work: friends in the domain (where else can I have friends if not "in the domain"?) and also a lot of people in interviews. I don't recall anybody ever saying that they work in a perfect Scrum environment. There is always something missing. It can be one daily meeting in a week that the team decides, for a good reason, not to hold. It can be related to the product owner role which is not well defined: sometimes we find him on the client side, but after 2 weeks of working with him we clearly see that he is not an actual product owner. Sometimes the product owner can be on our side, knowing very well what he is doing, but again this is not good, because he is not on the client side. It can be the retrospective meeting which, again, for a good or bad reason, takes place only once every two iterations (not by the book). Let me try to go into more detail with three examples where adapting overrules the theory. The first example is related to sprint length. If we take 3 different teams with sprint lengths of one week, two weeks and three weeks and ask the team members the question "How long is your sprint?", in all three cases, almost every time, the answer would be: "I know it's not perfect Scrum, we work with x-week sprints." Who's saying it's not perfect Scrum? I know there are a lot of debates on forums about this, and the bottom line would be that it's all about the regular cycle

of delivering, not the length of the cycle. The context is the key to defining this. We have a case where the client changes priorities very often, based on the needs of his clients. We found out that this works out great with one-week sprints. And that's fine. We have another client who is able to provide clean requirements for three weeks of work, and that's how we go. We have two different contexts, two different situations, very good results in both. It is all about analyzing the right context and the right solution for that context; it's all about adapting. Guidelines are great as a baseline, but from there the agile word comes in. Another situation I encountered very often is related to end-of-sprint meetings. Basically, we would tend to have a sprint review meeting and a retrospective meeting. Now the funny thing comes up. Based on the fact that 63% of statistics are actually random numbers, I can precisely say that 80% of the people I've talked with are not satisfied with the end-of-sprint meetings. Either because the name is wrong (because the etiquette is so important, isn't it?), or for minor stuff like: it's not a real sprint review meeting because the client goes through what was done, not us (and how proud we are when we present the result of our work, and at the same time the feelings we have when we say "it worked perfectly until now, something must have gone wrong at the last merge before the demo" are unmatchable and we won't trade them for anything). The table which stores the retrospective ideas has the wrong header; we used to have totally



different notations in a previous project. Tied to notation, people tend to forget the scope, which is: review the work, present it to stakeholders, plan what was not complete and improve the process. I've seen projects where a better efficiency was obtained when the client accepted the stories during the sprint, so that the sprint review meeting would have been redundant. I've seen projects where the retrospective meeting was done only once every 3 iterations. All was dictated by the adaptation to the context, and there were successful cases. And as a small cherry on top: if the ideas from the sprint retrospective remain just ideas on a table, and they are not actually reviewed and used as a starting point for the next process improvement, and we are doing the sprint retrospective just to mark another Scrum "must have", it's not the end of the world if we don't do it anymore. Dump the waste. Save the time for something you find more useful. But please don't complain after that. The most controversial example came into my mind in an internal discussion with my colleagues. How and when do you integrate the QA process into the sprint? Performing a quick retrospective over the projects I've been part of, and also remembering a lot of discussions on this theme, I was able to quickly enumerate several different approaches: part of the stories and story estimations; decoupled from stories but in the sprint cycle; decoupled from stories and scheduled for the next sprint cycle; in the sprint cycle, scheduled at the end of the iteration; in the sprint cycle, scheduled after each task; and many more. There could be a code/feature freeze before the sprint cycle ends, in order to allow bug fixing at the end, or

there might be no such thing. If for sprint length the number of possibilities is somehow reduced to mainly 1-2-3 weeks (anyway, very limited), in this case we can have ten different approaches for ten different teams. And each one might be correct; each one could have found what was suitable in that context. Honestly, sometimes I wish I owned the recipe for this, because finding what is suitable for the context is not easy, but where would the pleasure have been then? And most important, once you have tried something and it doesn't work, don't be afraid to change. After all, it's agile. In all these situations, people tend to forget the end result. Is the team working well together? Does the team deliver what they plan to deliver? Is the client happy? No, what matters the most is that the trend, a perfect Scrum world of which we are all dreaming, is not met. And of course there is human nature, which most of the time is resistant to change. Most people prefer to stick to the easy way, to stay on the safety net and refuse to meet the actual scope of this methodology: to be agile. Logically, if the answer to the above questions is yes, then, even if you don't know that everything that happens now in the world follows the pattern (groom, plan, act, meet daily to confirm that you are acting, show, write down what you learned, and do all this in cycles of x weeks), everybody should be happy.

let's not be so tied to the trend and to the concept that we forget that Scrum comes near the Agile word. It's OK to start with the Scrum mindset. It's perfect when we don't have to change a bit and everything works like a charm. But realistically this doesn't happen. It's OK to adapt, to improvise and to find the right implementation for your solution. It's also what the books are stating, but everybody passes very fast through those lines in order to find the rules that would lead them to the perfect Scrum. Start with the concept, see how it works, and find the right way for you, the one that will allow you to best achieve your targets in the given context. That will not make you less trendy, it will not exclude you from the Scrum world, but it will very probably make you more efficient and happy with what you are doing.



5 Simple Rules for an Efficient Campaign

A 2013 study shows that 77% of consumers prefer receiving an email instead of a social media message or an SMS. In 2014, an email campaign with strong marketing content can be a powerful, revenue-generating tool.

Before planning any campaign you should make sure that:
• The database only contains users who have agreed to receive email communications from the company.
• Users have bought a product or service in the last 18 months.

#1. Segment your audience

One of the most frequent mistakes in email marketing is sending the same message to all the customers. Customers are different, so their area of interest is different. They can be divided according to:
• Favorite / bought products
• The average amount of money spent on an order / the entire amount spent in a month
• The interest they show in your emails
• The experiences they have so far had with your company (positive or negative)
Depending on the criteria above, you can generate a personalized message, targeted at the client's interests.

#2. Test

The most tested item of an email is the Subject line (75%), followed by content/message (61%) and the call-to-action button or the images used in the email (50%). Other elements that can be tested are:
• the From field
• the hour of the day when the campaign is sent
• the day of the week when the campaign is sent
• the Landing page used
• the audience that receives the email

#3. Define a pattern and send the email at the same hour

Once you have found out, during the testing phase, which moment of the week and day is the best to send a campaign, it is recommended that you remain consistent. Of course, depending on the period and season of the year, the time and day may change.

#4. Adjust your layout for mobile

Did you know that 75% of smartphone users will delete the received email if its layout is not adjusted for mobile? Moreover, the emails opened on a smartphone outnumbered those opened on a PC in 2013. That is why it is important for your newsletter to be optimized for all devices. The optimization can be done starting with

the layout and continuing with the content and links inserted in the email.

#5. Personalize your email as well as the user journey behind it

When you are planning an email marketing campaign, take a step back and look at the campaign as a whole. Let's suppose a sports items shop will soon enter its sales period. When the user opens the received email and is interested in a product, the message must be reiterated once he has reached the product page. One can also create special landing pages, in order to facilitate and shorten the buying process. Once the above rules have been complied with, do not forget the most important one, namely to request feedback from your clients! Select the clients that are most attached to the brand and send them an email with a feedback questionnaire. You will find out new things about your campaign.

Ruxandra Tereanu

ruxandra.tereanu@betfair.com Conversion Analyst @ Betfair




Big Data and Social Media: The Big Shift


Since social media platforms expanded through our lives, the amount of data exchanged across them has sharply surged. We write texts describing an idea, an opinion, a fact; we upload images and videos; we share our preferences by using simple buttons ("like", "favorite", "follow", "share", "pin" etc.); we accept in the network people we know very well in our real life and people we have never met before and probably never will... and everything goes into the network almost in real time! Suddenly, we realize that the unit measure of the data handled in a given amount of time reaches the order of exabytes. This data is not only big in volume, but is also extremely diverse, and it moves at incredible speeds. The information contained in it is relatively incommensurable. Fact is, Facebook, Twitter and Pinterest can see when you fall in love, what your mood is, where you are and many other behaviors that you decide to show. The question is: what can we do with this massive amount of data created through social media?

What does Big Data actually mean?

At first sight we can describe Big Data as very large and complex data sets, impossible or hard to handle with classic data processing tools. The expression itself is being used as it originated from English; we must note that French specialists are currently translating it as "grosses données" (big data) or "données massives" (massive data) or even "datamasse" (datamass) as in "biomass". The novelty of the concept and the blurred definition lines prevent the localization of the term. In 2012, Gartner (that had somewhat contoured the term in the early 2000s) updated the definition: "Big data is high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization."

The above definition outlines the dimensions of Big Data, the well-known 3Vs: volume, velocity, variety. Yet, the great thing about this formulation is that it opens multiple perspectives on the Big Data concept. Recently a 4th V has been attached to the above definition: Veracity. We may note a technology view, a process view and a business view.

Quick Facts

According to the information gathered by IBM in a report based on sources provided by McKinsey Global Institute, Twitter, Cisco, EMC, SAS, MEPTEC and QAS, the following interesting facts are worth paying attention to:

Volume related:
• Facebook ingests approximately 500 times more data each day than the New York Stock Exchange (NYSE).
• Twitter is storing at least 12 times more data each day than the NYSE.
• It's estimated that 2.5 quintillion bytes (23 trillion gigabytes) of data is created each day.
• 6 billion people out of 7 billion people (world population) have cell phones.
• It is estimated that 40 zettabytes (43 trillion gigabytes) of data will be created by 2020 (300 times more than in 2005).

Variety related:
• 300 billion pieces of content are shared on Facebook every month.
• 400 million tweets are sent per day by about 200 million monthly active users.
• 4 billion hours of video are watched on YouTube every month.
• By 2014, it's anticipated that there will be about 420 million wearable, wireless health monitors.

Velocity related:
• NYSE captures 1 TB of trade information during each trading session.
• Modern cars have close to 100 sensors that monitor items such as fuel level and tire pressure.
• By 2016 it is estimated there will be 18.9 billion network connections (almost 2.5 connections per person on earth).

Veracity related:
• 1 in 3 business leaders do not trust the information they use to make decisions.
• 27% of respondents in one survey were unsure about how much of their data was inaccurate.

Social Media Analytics and Big Data

One of the essential characteristics of Big Data originating from social media is that it is real-time or near-real-time. This gives the exploratory analysis a wide perspective on what is happening and what is about to happen at a certain time in a certain area. Each fundamental trait of Big Data can be understood as a parameter for quantitative, qualitative and exploratory information analysis.
• Volume - There are two types of data that social media platforms collect: structured and unstructured. In addition, the collection source is diverse: HTM (human to machine), MTM (machine to machine) or sensor based. For social scientists the



total mass of the data allows the definition of multiple classes and criteria, and the refining of analysis sets and subsets.
• Variety - The data formats vary from text documents and tables to video data, audio data and many more. This lifts the data analysis to a higher complexity level; therefore, the statistical models will also be adjusted in order to obtain viable information.
• Velocity - Speed is a key aspect in trend and real-life phenomena analysis. The faster the data is generated, shared and understood, the more information it can reveal. By analyzing the spreading speed of a certain data set, one can grasp the potential impact of the information it contains on a specific social group in a defined territory. Another interesting aspect is that one can track the data distribution chain.
• Veracity - For the seasoned data analyst it is essential to be able to evaluate the truthfulness, the accuracy and the honesty of the data put to analysis. Here the discussion goes around the responsibility of the initial data generator, the goal for which the data is being released and the reactions of the receivers.


Big Data Management

One of the biggest challenges at this time is to build the proper tools and systems to manage big data. As real-time or near-real-time information delivery is one of the key features of big data analytics, research aims to set up database management systems able to meet the new requirements. The technology in progress involves the following:
Storage: For the storage and retrieval of data, the underlying NoSQL developments are best represented by MongoDB, DynamoDB, CouchBase, Cassandra, Redis and Neo4j. Currently they are known as the most performant document, key-value, column, graph and distributed databases.
Software: The Apache Hadoop ecosystem counts Cloudera, HortonWorks and MapR. Their main goal is to expand the usage of big data platforms to a larger and more diverse user range. Secondly, these technologies focus on increasing the reliability of big data platforms, enhancing the capability of managing them and improving their performance features.
Data Exploration and Discovery: Big data analytic discovery is a hot research and innovation topic. Major developments have been made by Datameer, Hadapt, Karmasphere, Platfora or Splunk.

The Opportunities

When dealing with a completely new size level, the capture, the storage, the research, the distribution, the analysis and the visualization of data must be redefined. The perspectives of handling big data are enormous and yet unsuspected! Often recalled are the possibilities to explore information shared in the media, to acquire knowledge and to assess, to analyze trends and to issue forecasts, to manage risks of all kinds (commercial, insurance, industrial, natural) and phenomena of all kinds (social, political, religious, etc.). In geodynamics, meteorology, medicine and other explorative fields, big data ought to improve the way the processes are being deployed and the data interpreted.

The Big Shift

In order to answer our initial question, the best thing we can do with this data mass is to EXPLORE it. As simple as it may seem, this statement has deep implications on the way we will see data analysis in the near future. The model is shifting from the traditional one, in which we plan, collect data and then analyze, to the new model, in which we collect everything and afterwards try to find significant patterns. The new analysis model has its own risks, but it also opens the way for a new generation of data analysts and scientists. At this point, I consider that this is the main impact that social media has had upon the way we see Big Data.

Diana Ciorba

diana.ciorba@codespring.ro Marketing Manager @ Codespring




OnyxBeacon iBeacon Management Platform


iBeacon is a popular term among mobile developers these days. The technology enables an iOS device (with at least iOS 7 installed) or an Android device (version 4.4 or newer required) to wake up applications on devices in close proximity. It's mainly used for indoor positioning; you can think of it as complementary to GPS, because it can also give information regarding where the user is indoors: are they close to the shoe store, or are they looking at the chocolate aisle in a hypermarket? iBeacon works on Bluetooth Low Energy, also known as Bluetooth Smart. A couple of potential applications could be:
1. iBeacon-specific devices could send notifications of items nearby that are on sale, or product information for items customers are looking at,
2. enabling mobile payments instead of NFC,
3. providing valuable information while in a certain indoor location.
Let's see a typical example of a use case regarding iBeacons. Mihai is a manager of an electronics retail chain. Geanina is a customer of the retail chain. She has the retail chain's mobile app on her iPhone 5S. She's in a certain store and allows the mobile app to monitor her location inside the store (she's looking at a Sony TV). Thanks to the iBeacon technology, Mihai can give Geanina ads relevant to her shopping history and current location (for instance, a discount on the Sony TV she's looking at). And that's just one use case.

How does an iBeacon identify itself?

An iBeacon device identifies itself via a combination of 3 customizable values: UUID (128 bit), major and minor (16 bit each). In the example we had above, the UUID would be an identifier for the retail chain, the major could identify the store number, and the minor could identify a specific location inside the store (the entry point of the store, a certain shelf, or the checkout). The signal the iBeacon broadcasts allows you to calculate the approximate distance from the smartphone and thus know where the user is inside an indoor location.
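As an illustration, the identity triple and the hierarchical matching it enables could be modeled like this (a hypothetical sketch of ours, not code from any actual SDK):

public class BeaconId {
    final String uuid; // 128-bit proximity UUID, e.g. one per retail chain
    final int major;   // 16-bit, e.g. the store number
    final int minor;   // 16-bit, e.g. a shelf, entry point or checkout

    BeaconId(String uuid, int major, int minor) {
        this.uuid = uuid;
        this.major = major;
        this.minor = minor;
    }

    // A "region" can be defined at any level of the hierarchy:
    // UUID only (any store), UUID + major (one store), or the full triple.
    boolean matches(String uuid, Integer major, Integer minor) {
        return this.uuid.equalsIgnoreCase(uuid)
                && (major == null || this.major == major)
                && (minor == null || this.minor == minor);
    }

    public static void main(String[] args) {
        // Made-up values for demonstration only.
        BeaconId shelf = new BeaconId("F0000000-0000-0000-0000-000000000001", 12, 3);
        System.out.println(shelf.matches("f0000000-0000-0000-0000-000000000001", 12, null)); // true
    }
}

This hierarchy is why one app can react both to "any beacon of my chain" and to "the checkout beacon of store 12" using the same three values.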

Are there any privacy issues regarding the iBeacons?

One of the big misconceptions about iBeacons is that they track you. That is not correct. The only thing the devices do is broadcast a signal to inform the application of the proximity. They provide data about the indoor location you are in with a better precision than GPS, in order to serve the customer with specific, relevant, context-aware information that the customer might need once in a specific location.

Now, who is OnyxBeacon and what is our role in this ecosystem?

We're a Cluj startup which was founded because we want to help mobile developers add great experiences for their users. We develop our own beacons, which are iBeacon compatible and can be used by businesses in their stores to enable proximity features in their apps. On top of that, we developed software in order to help mobile developers make use of this technology. First of all, we have iOS and Android SDKs, which mobile developers can use to leverage the beacon functionality. We also developed a cloud backend, which is used for beacon management, advanced scheduling of content availability, and API access. Mobile apps can integrate the SDK to add support for beacons and the backend API.

OnyxBeacon Platform Components

OnyxBeacon iOS SDK

The iOS OnyxBeacon SDK enables iOS developers to add support for iBeacons and the OnyxBeacon backend in their apps, bringing the iBeacon experience to their users. The SDK is easy to integrate and use; with just a few steps, the app will start receiving notifications from the OnyxBeacon backend. The iOS OnyxBeacon SDK wraps iBeacon protocol handling, iBeacon management, notifications management and communication with the OnyxBeacon backend, and exposes simple calls for receiving notifications, ready to use and present to users. The iOS SDK is provided in the form of a sample app that contains the following modules:
• OnyxBeacon library - contains the logic for managing Beacons and Coupons
• Sample App - an app that shows how to integrate the library and use the interfaces provided by the library
• AFNetworking - the 3rd party library used for running requests
• Facebook framework - used in the sample app to get user information

OnyxBeacon Backend Entities

The OnyxBeacon backend defines several manageable entities that define the end user experience. Each entity is scaled to provide flexibility in defining the content that will be offered to the App user.

Application Bundles

Define the needed information and authentication tokens required for a specific App (implementing the SDK) to properly communicate with the OnyxBeacon API.



The following properties can be defined:
• Name - Application Name - Used to distinguish the different applications that use the SDK
• Description - Useful for adding comments and non critical data about the App
• Identifier - The App Identifier - This must be the one available when the App has been published to the mobile store / during the development process
• Secret - The initial secret for the App. Required in Coupon authentication

Company UUIDs

Define a list of available Beacon Identifiers that will later be used to uniquely identify a Beacon. The following properties can be defined:
• Name - ~
• Description - ~
• Identifiers - A unique identifier (UUID format) used in conjunction with other properties to uniquely identify a Beacon - This is required

Beacons

Define a list of unique beacons to be used in conjunction with other entities and serve the data to the end user. The following properties can be defined:
• Name - ~
• Description - ~
• UUID - One UUID defined for the Company in the previous step
• Major/Minor - Properties that help in defining a unique Beacon. Multiple Beacons can have the same UUID, but require a unique Major/Minor combination
• Latitude/Longitude/Altitude - Used to define the final location of a Beacon, once deployed

Media

Used to define media entities served to the App user. Currently supports images to be served with Coupons.

Coupons

Defines the content that will be served to the end user. The following properties can be defined:
• Name - ~
• Description - ~
• Message - A message shown to the user once the Coupon is served
• URL - The user will be redirected to this URL when clicking the served Coupon
• Media - The image shown on the coupon

Time Frames

Define time frames used in conjunction with Plans and Coupons. A certain Coupon will be served to the end user in that specific time frame, for a given Plan. The following properties can be defined:
• Name - ~
• Description - ~
• Coupon - The Coupon to use
• Start - The moment the coupon becomes active
• Stop - The moment the coupon is no longer active

Plans

Define the Promotion Plan for a given Beacon. Multiple plans can be created to accommodate different time frames and/or Beacons. The following properties can be defined:
• Name - ~
• Description - ~
• Beacon - The beacon to which the plan applies
• Time Frame - The time frame used (consequently, what will be served to the end user)

Promotions

Define a Promotion, having multiple Plans and a time frame in which it is available. This gives even more flexibility to how the content will be served to the end user. The following properties can be defined:
• Name - ~
• Description - ~
• Start/Stop - Time frame in which the promotion is available
• Plans - The Plans available for the promotion

Workflow

Creating the needed entities for a promotion requires other entities to have been defined, or to be defined on the spot. A typical flow will follow the above order, defining one entity at a time. The entity administration screens allow the administrator to define new entities 'on-the-fly' inside other entities. For this, just click the 'Add new' button on the right side of the field. Entities can also be created by using the 'Setup Entities' section, accessible from the top menu. New entities created here will have to be linked manually once created, or by using the 'Add new' feature while defining them. The workflow, and the required dependencies, can be summarized as:
Application Bundles -> Company UUIDs -> Beacons -> Media -> Coupons -> Time Frames -> Plans -> Promotions.

Backend API Calls

If somebody has a different backend and wants to use our SDKs for iOS and Android, we can provide API calls. All API calls use the POST method and are performed against the server URL. The server URL can be configured in the SDK to point to a different address than https://connect.onyxbeacon.com/api.php. Request and response bodies are JSON objects.

Fetching UUIDs

On startup, the SDK will make a request to get the list of proximity UUIDs that are configured for the bundle identifiers on the mobile device, provided in the request body.
• Request
{
    "function": "getUuids",
    "parameters": {
        "identifier": "com.example.app",
        "installid": "device identifier string"
    }
}
• Response
["uuid1", "uuid2"]
The response contains an array of proximity UUIDs for which the app should monitor the regions.
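For integrators, the whole convention above boils down to an HTTPS POST with a JSON body. Here is a minimal client-side sketch in Java (our illustration only; the SDKs already do this for you, and error handling is omitted):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class GetUuidsExample {
    public static void main(String[] args) throws Exception {
        // The payload mirrors the getUuids request shown above.
        String payload = "{\"function\":\"getUuids\","
                + "\"parameters\":{\"identifier\":\"com.example.app\","
                + "\"installid\":\"device identifier string\"}}";

        URL url = new URL("https://connect.onyxbeacon.com/api.php");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST"); // all API calls use POST
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }

        // Expected response shape: ["uuid1", "uuid2"]
        try (InputStream in = conn.getInputStream();
                Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            System.out.println(scanner.hasNext() ? scanner.next() : "");
        }
    }
}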



Requesting content for Beacons
• Request
{
    "function": "getContentForBeacons",
    "parameters": {
        "identifier": "com.example.app",
        "installid": "device identifier string",
        "beacons": [
            { "uuid": "uuid1", "major": "majornumber", "minor": "minornumber" },
            { "uuid": "uuidn", "major": "majornumber", "minor": "minornumber" },
            ...
        ]
    }
}
This call can be used for any content type defined by the developer.
• Response
The response is a JSON object received from the server and its structure is defined by the developer.

User Metrics
• Request
The user information dictionary is provided under the userMetrics key in the parameters dictionary.
{
    "function": "setMetrics",
    "parameters": {
        "identifier": "com.example.app",
        "installid": "device identifier string",
        "userMetrics": {
            "userkey": "uservalue",
            ...
        }
    }
}
The user information sent to the backend is developer specific. The Sample App uses the user's Facebook information for further processing and analytics on the server.

Coupon Metrics
The coupon metrics call is specific to the mobile marketing module and can be used to notify the backend of user actions. The information is contained in the couponMetrics dictionary and has two keys:
• couponid - the id of the content provided by the backend
• couponaction - one of the opened or tapped values is set, depending on the user action
• Request
{
    "function": "setMetrics",
    "parameters": {
        "identifier": "com.example.app",
        "installid": "device identifier string",
        "couponMetrics": {
            "couponid": "cid",
            "couponaction": "opened"
        }
    }
}

We will be part of Techsylvania, the conference, and at its hackathon we will also have our beacons and our SDKs, which you can use to build great mobile user experiences.


Bogdan Oros
bogdan@onyxbeacon.com
Co-Founder @ OnyxBeacon



For real!


We have expectations from ourselves to be rational beings, maybe with very few exceptions. We certainly have these expectations from everybody else. This is most visible at the organizational level.

People HAVE TO behave rationally/reasonably/logically. We expect ourselves, but mostly everybody else, to analyze situations objectively, decide rationally on the course of action and then implement it according to the plan. We expect to be computing machines that use predictable and accurate algorithms and operate with little or no flaw. We expect this from our colleagues, the team, the leaders, the clients and everybody in between. Complete input, clean storage, objective processing and complete output. I will not get into details, as there is not enough space; I remember that the introductory course in Psychology alone was one year long. I will, however, try to cover some things that will get us to RETHINK THE THINKING. I believe that if we manage to get a glimpse of how we operate, we will end up better off than if we continue to expect and pretend the "machine-like" operating. Let's start by admitting that reality will never be perceived objectively. The mere perception distorts the data, even if only to store it. To be able to store the data for later access, we need to connect it with other data. When we perceive, we do it by filtering the data. Those filters choose what data they will store, and after that they encode the data they let through (something like 0111010001101000011010010110111001101011). This is surely just the beginning, because as the data stays in storage it doesn't exactly stay: it interacts, changes, disappears or even creates new data. As if there were not enough data contamination yet, the moment we search for, retrieve and extract it, the data gets modified again, selectively accessed and

even created. We have NO CHANCE of perceiving, storing and accessing CLEAN DATA. Think about working with this kind of thing. We do it, every day! A fraction of this is represented by cognitive biases. There are a lot of them; however, we will talk about those that are most pervasive in organizational life. For an extended list, I invite you to look up the "list of cognitive biases" on Wikipedia and follow the links from there on. Since March has just gone by, and April actually, most organizations have had their Quarterly Performance Reviews: those awkward situations where employees and their leaders get together to argue about performance. Performance management systems have enough issues built into them, but let's assume for a moment that they are OK (for the sake of this article). The biggest issue is that they are used by people and for people. Here we have a new flood of issues. The principle behind performance management tools is that if we record performance and feed those results back to the people, we can influence future performance. This seems plausible only if we consider the recording and the feedback to be rational/objective:
• Performance can be clearly defined
• Performance can be objectively measured
• Having feedback on how you perform impacts future performance

Well, here we go! Go? Not really! The people that provide data to that process have serious operating issues (we will call them biases). Cognitive biases are tendencies to think in a certain way; they represent a tendency to systematic deviations (including errors in statistical judgment, social attribution, and memory).

THE FEEDBACK PROVIDER:
1. The most obvious and intrusive error in PM actions is linked to memory errors. We talked about this in the opening of the article (the data contamination), because it messes up the very material we're working with (information from the past).
2. Fundamental attribution error - The tendency for people to over-emphasize personality-based explanations for behaviors observed in others, while under-emphasizing the role and power of situational influences on the same behavior. In the PM context, this means we will assume that the cause of performance has more to do with personal traits (personality, ability, motivation, ambition, intent, attitude, and so on) and less to do with luck or context. This is most visible if you count the number of times the verb to be finds its way into the speech ("You need to be more responsible", "You were really proactive").
3. Negativity bias - The psychological phenomenon by which humans have a greater recall of unpleasant memories compared with positive memories. Negativity effect - The tendency of people, when evaluating the causes of the behaviors of a person they dislike, to attribute their positive behaviors to the environment and their negative behaviors to the person's inherent nature. Observation selection bias - The effect of suddenly noticing things that were not noticed previously, and as a result wrongly assuming that the frequency has increased. These three biases influence the actual score, as well as the content, of the feedback session. They also bring into the conversation words like always or never ("you're always late", "you never listen", etc.). Sounds familiar? Don't just look at the negative side: this general assessment doesn't always have to be negative; we can also jump to the positive bias and the positivity effect.
4. Confirmation bias - The tendency to search for, interpret, focus on and remember information in a way that confirms one's preconceptions. With this, each of the parties will operate with very different data during the PM session, which will lead to undesirable outcomes.
5. Empathy gap - The tendency to underestimate the influence or strength of feelings, in either oneself or others. The super-powers we attribute to ourselves lead us to assume that we are immune to bias because of how we feel. This may affect ratings for different people, or ratings at different moments.
6. Hot-hand fallacy - The fallacious belief that a person who has experienced success has a greater chance of further success in additional attempts. I would call this the "good student bias". Do you remember how, if you were a high-grades student, all the teachers had a tendency to give you higher grades or, in case of a wrong answer, to find excuses and ignore that behavior? Now imagine this one in the organizational setting.

THE FEEDBACK RECEIVER:
1. Social desirability bias - We have the tendency to "over-report" behaviors or characteristics that are desirable and to "under-report" the undesirable ones. This influences both parties in the PM talk.
2. Illusory superiority - Overestimating one's desirable qualities, and underestimating undesirable qualities, relative to other people. This is most visible in feedback linked to the review of performance in a group task.


3. Self-serving bias - The tendency to claim more responsibility for successes than for failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests. This is most visible in the expectations regarding the content of the feedback: we expect the assessor to see things the way we see them.
4. Egocentric bias - Recalling the past in a self-serving manner, e.g., remembering one's exam grades as being better than they were, or remembering a caught fish as bigger than it really was. Well… you see how this plays out.
5. Spotlight effect - The tendency to overestimate the amount that other people notice your appearance or behavior. Illusion of transparency - People overestimate others' ability to know them, and they also overestimate their ability to know others. These two lead to the false expectation that the assessor will use more data than they actually use when rating performance. False consensus effect - The tendency for people to overestimate the degree to which others agree with them. This leads to things like: "everybody thinks this", "everybody noticed".

The list is long, and I invite you to avoid the Google effect when you read it. Considering all this, I am not surprised that it takes a huge effort to start the PM activities and complete them. I would procrastinate on them for as long as I could, or I would perceive this organizational habit as useless or unnecessarily painful. Well, you have managed to get through this article and to think a bit about all these biases. When remembering this article, remember that there are also these three:

Bias blind spot - The tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself.

Conservatism (Bayesian) - The tendency to revise one's belief insufficiently when presented with new evidence.

Reactance - The urge to do the opposite of what someone wants you to do, out of a need to resist a perceived attempt to constrain your freedom of choice.

WHAT TO DO ABOUT THEM? Only ONE THING: remember that performance assessment is never about the past. Because of this, it's never about WHO'S RIGHT and WHO'S WRONG. Nobody is right and nobody is wrong, and that's awesome. The only reason why we do PM activities is to LEARN FROM THE PAST and figure out what worked, so we can keep doing that, and what didn't work, and how we can change it. The most used phrases in PM activities are "WHAT WILL THE FUTURE LOOK LIKE?" and "WHAT DO WE NEED TO GET THAT FUTURE?". All this can happen when there is a positive and human relationship between the two parties. If we talk about the past, about who is right and who is wrong, we get GUILT and SHAME, even if we win the argument. These feelings lead to helplessness, withdrawal and defensiveness. People need to feel empowered to change their behaviors for the better - both parties. Cognitive biases will always be there; there is no way to get around them. But we don't need to. If we change the focus, it won't matter that they still contaminate the data, because people will focus somewhere else.

Antonia Onaca
anto@aha-ha.com
consultant


programming


Template Engines for Java Web Development

In many projects we deal with situations when we have to process text, generate reports, scripts or source code. Many times these problems can be solved with the help of certain tools called template processors. However, the most common scenario for using template processors is that of web development, where the need to separate business logic from the presentation layer was identified. When it comes to web applications developed using Java technologies, JavaServer Pages was the standard for the presentation layer for a long time. Among many other things, JavaServer Pages has the features of a template processor. The biggest shortcoming of this technology is the possibility to insert business logic into the presentation code. Thus, we can insert scriptlets or even Java code blocks into a JSP document. Although this can help us in certain situations, the code rapidly becomes complex and hard to maintain. Moreover, this approach doesn't follow the MVC design pattern. Because of shortcomings such as this one, JSP lost many followers in favor of other template engines, which are used in more and more projects, increasing the developers' productivity and the quality of the products. The projects that we will talk about help us easily create dynamic content, combining templates with the data model in order to produce result documents. The templates are written in a templating language and the resulting documents can represent any formatted text. Besides the well-known processors, such as Apache Velocity and FreeMarker, in recent years a variety of new template engines were launched. In this article we will analyze four products freely available for commercial use, trying to make a comparison between them.

Spring MVC and Template Processors
When we plan to develop a web application using Java technologies, one of the first things we do is to choose an MVC framework. Experience has proven that Spring MVC is one of the most popular web frameworks, and in this article we will analyze the template engines from the point of view of their integration with it. In order to illustrate a few features of each presented processor, we will develop a simple web application containing two pages: one that displays a list of products and another that displays the details of a product.

Apache Velocity
Velocity1 is a project distributed under the Apache Software License, which enjoys great popularity among Java application developers. Although it is often used in a web context, we are not limited to using the product only for this kind of project. It can be used either as a standalone utility for source code and report generation, or as an integrated component of other systems. Like other template engines, Velocity was designed to enable webmasters and Java developers to work in parallel. Thus Velocity helps us separate the Java code from the web pages' code, offering a good alternative to JSP.
1 http://velocity.apache.org/

Velocity Template Language
Apache Velocity enjoys its own scripting language called Velocity Template Language (VTL), which is powerful and flexible. The authors of Apache Velocity proudly advertise that their product's flexibility is limited only by the user's creativity. VTL was created to offer the simplest and cleanest way of inserting dynamic content into a web page. Velocity uses references to encapsulate dynamic content in web pages, and variables are one of these reference types. They are able to reference objects defined in the Java code, or they can receive values inside the web page through a VTL declaration. Such an instruction begins with a # character, as we can see in the following example:

#set( $magazineUrl = "http://www.todaysoftmag.com/" )

Spring MVC Integration
Spring MVC natively supports the template engine we are talking about, and consequently their integration is a trivial process. Assuming that we are using Maven for creating the project, we choose the maven-archetype-webapp archetype from the org.apache.maven.archetypes group. Without listing all the necessary Maven dependencies, we have to mention that we need the following artifacts from the org.apache.velocity group: velocity and velocity-tools. The source code of the example project can be downloaded from the following URL: https://springvelocity.googlecode.com/svn/trunk. After we have declared in web.xml the DispatcherServlet servlet, which will handle all requests, and we have defined the servlet-context.xml file with all its Spring MVC specific elements, we can declare the beans needed for working with Velocity. First of all, we need a bean with the id velocityConfig, to which we pass the relative path where the templates for the application's pages will be placed:

<property name="resourceLoaderPath" value="/WEB-INF/views/"/>

Then we declare a view resolver, which receives several properties. The class of this bean has to implement the ViewResolver interface, and its main purpose is to find views by name. The most interesting property of this bean is layoutUrl. The value of this property is the name of the template that sets the general layout:

<property name="layoutUrl" value="layout.vm"/>

Velocity also has the ability to cache the templates that are being used. This is specified by the cache property. Now that we have configured the application so that the presentation layer is represented by Velocity templates, we can see what these templates look like. The most interesting part of the template that determines the general layout is the presence of the $screen_content special variable. This variable will contain, at runtime, the processing result of the template that corresponds to the view returned by the Spring MVC controller. In our application's case there is only one controller, which is able to return either the list view or the details view. The template corresponding to the list view is list.vm and has the following content:

<ul>
#foreach($product in $products)
<li><a href="/product/${product.id}">${product.name}</a></li>
#end
</ul>

In the above block we can notice how a collection is iterated and how the objects' properties from the model are accessed through VTL.
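The same merge can be reproduced outside the web application, since Velocity also runs standalone. The following minimal sketch is ours (the Product bean and the inline template string are illustrative); only the VelocityEngine and VelocityContext classes come from the Velocity core.

import java.io.StringWriter;
import java.util.Arrays;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class VelocityListDemo {

    // A trivial model bean; Velocity resolves ${product.id} through getId().
    public static class Product {
        private final long id;
        private final String name;
        public Product(long id, String name) { this.id = id; this.name = name; }
        public long getId() { return id; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        VelocityContext context = new VelocityContext();
        context.put("products", Arrays.asList(
                new Product(1, "TSM no. 23"), new Product(2, "TSM no. 22")));

        String template = "<ul>\n"
                + "#foreach($product in $products)\n"
                + "<li><a href=\"/product/${product.id}\">${product.name}</a></li>\n"
                + "#end\n"
                + "</ul>";

        StringWriter out = new StringWriter();
        // evaluate() merges the template with the context; "demo" is just a log tag
        engine.evaluate(context, out, "demo", template);
        System.out.println(out);
    }
}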

Advantages and Disadvantages
Being one of the most popular projects as far as template processors are concerned, Velocity benefits from well-established community support. The documentation on the official website is rich and there are many articles that answer various questions. In addition, there are a few books dedicated to Apache Velocity, or books that discuss a few of its features. If these resources are not enough, there is a mailing list with a rich archive that we can subscribe to. There is a variety of aspects that can convince us to choose this product for our next project: Velocity is a powerful, flexible and feature-rich processor. There are some developers who state this is the most powerful tool of its kind on the market. Another strong point of this project is that, besides the Velocity Engine, it is composed of several subprojects: Tools (tools and infrastructure elements useful for web application development and more), Anakia (XML document transformation), Texen (text generation), DocBook Framework (documentation generation) and DVSL (Declarative Velocity Style Language - XML transformations). When it comes to IDE support, Velocity benefits from a series of plug-ins developed by members of the community. These plug-ins are dedicated to a few IDEs such as Eclipse, NetBeans and IntelliJ IDEA, to name only the most popular. Many of these plug-ins offer syntax highlighting and even auto-completion. For developing the example presented in this article we used Velocity Edit for Eclipse. As we showed in a previous paragraph, Spring MVC integration is easy. Also, the Spring MVC distribution contains a library with macros for binding support and form handling. These tools are valuable for web application development. Although it is the most popular template engine in the Java universe, Apache Velocity hasn't had any release since November 2010, when version 1.7 of the core was released. The project is still active, but the community seems to be pleased with the functionalities already implemented and there is no release planned yet. Velocity is a project heavily contributed to, and it has become a complex product, intimidating for new users. The syntax is somewhat cumbersome, and writing templates without help from an IDE that supports the VTL syntax can be a nightmare.

FreeMarker
FreeMarker2 is a product that has reached maturity, distributed under a BSD license. Similar to the Apache Velocity project, FreeMarker offers complex functionalities designed for assisting web developers. It was designed for efficiently generating HTML pages, and not only that: as the creators of the project state, this is a generic software product that can be used for text generation, ranging from HTML to source code.
2 http://freemarker.org/

FreeMarker Template Language
For template description, FreeMarker provides us with a strong templating language called FreeMarker Template Language. With FTL we can define expressions, functions and macros within the templates that we write. Furthermore, we also have the possibility to use a rich library of predefined directives that give us the possibility to iterate data collections, include other templates, and much more. We will continue by presenting an example of an FTL directive that assigns a value to a variable:

<#assign magazineUrl="http://www.todaysoftmag.com/">

We can clearly see that in the FreeMarker-specific templating language the predefined directives are called by using the following syntax: <#directivename parameters>, the macros can be called like this: <@macro parameters>, and the expressions are written in this manner: ${expression}.

Spring MVC Integration
Just like in Apache Velocity's case, FreeMarker benefits from support for Spring MVC integration straight from the web framework's creators. This way, the Spring MVC distribution offers us a ViewResolver implementation, but also a macro library for binding support and form handling. By creating a Spring MVC + FreeMarker application in the same way as with Apache Velocity, we can see that the only modifications we need to make are in the Spring MVC configuration file and in the templates. Actually, we will see that this holds true also when we talk about the Thymeleaf and Rythm projects. The freemarker Maven dependency from the org.freemarker group is also needed. The source code for the example project can be downloaded from the following URL: https://springfreemarker.googlecode.com/svn/trunk. In the servlet-context.xml file we replace the configuration bean and the view resolver bean. The most interesting aspect of these XML elements is the property with the auto_import key, which allows us to import the macros into all our FTL files. The macros are defined in the spring.ftl file, which is offered by the Spring MVC distribution. They are accessible through the spring alias:

<prop key="auto_import">/spring.ftl as spring</prop>

The template that belongs to the list view is represented by the list.ftl file. Its most relevant part is the following:

<#include "header.ftl" />
<h2>Products</h2>
<ul>
<#list products as product>
<li><a href="/product/${product.id}">${product.name}</a></li>
</#list>
</ul>
<#include "footer.ftl" />

In this template we can see how the content of another template can be included, how we can iterate over a collection of objects, and how we can write an FTL expression.
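Before weighing its trade-offs, a minimal sketch of driving FreeMarker programmatically, outside Spring, may help. The directory path, template name and model values below are illustrative assumptions; the Configuration and Template classes are FreeMarker's own.

import java.io.File;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class FreeMarkerListDemo {

    // A map works as a model object; ${product.id} resolves to the "id" entry.
    private static Map<String, Object> product(int id, String name) {
        Map<String, Object> p = new HashMap<String, Object>();
        p.put("id", id);
        p.put("name", name);
        return p;
    }

    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration();
        // Illustrative path: the folder holding list.ftl, header.ftl and footer.ftl
        cfg.setDirectoryForTemplateLoading(new File("src/main/webapp/WEB-INF/views"));

        Map<String, Object> model = new HashMap<String, Object>();
        model.put("products", Arrays.asList(
                product(1, "TSM no. 23"), product(2, "TSM no. 22")));

        Template template = cfg.getTemplate("list.ftl");
        Writer out = new OutputStreamWriter(System.out);
        template.process(model, out);  // merges the model with the template
        out.flush();
    }
}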


Advantages and Disadvantages
The FreeMarker project benefits from extensive documentation, the official site consisting of a rich user manual from which both programmers and designers can extract a lot of useful information. There is also a discussion mailing list, but users are encouraged to look for help on Stack Overflow, by asking questions marked with the "freemarker" tag. Although FreeMarker is a popular template engine, only a single book dedicated to it has been published so far. FreeMarker offers us a complete templating language and is relatively easy to comprehend. It can be said that the competition between it and Velocity is pretty tight, FreeMarker being a product that has reached maturity, ready to be integrated into enterprise projects. Furthermore, similarly to the project presented previously, FreeMarker offers template caching mechanisms. On the official page there is a list of plug-ins for different programming environments. For the example presented in this article we've used the plug-in that is part of the JBoss Tools Project for Eclipse. It offers syntax highlighting, indicators for syntax errors, and code completion for macro names and bean properties. It must also be noted that the available plug-ins for FreeMarker are not as "strong" as those written for Apache Velocity. Actually, the NetBeans-dedicated plug-in doesn't seem to work with version 7, although online it is indicated that it supports versions greater than or equal to 6. Just as in Apache Velocity's case, FreeMarker integrates well with Spring MVC. As we have seen in a previous section, Spring MVC offers not only a ViewResolver implementation but also a library with macros for binding support and form handling. The project is active, the most recent version, 2.3.20, having been published in June 2013. We have seen that the project shows roughly the same advantages as the template processor presented before it, and we can say that the disadvantages are alike as well: due to the fact that it has been developed for several years, it has become complex and can seem difficult to master for new users.

Thymeleaf

Thymeleaf3 is a Java library distributed under version 2 of the Apache License, with the main goal of creating templates in an elegant way. The most suitable use case for Thymeleaf is the generation of XHTML / HTML5 documents within a web context. However, this tool can be used in offline environments as well, being able to process XML documents. Thymeleaf's creators offer us a module for Spring MVC integration, which gives us the possibility to use this product for the visualization layer of our web applications, thus being a substitute for JSP. Thymeleaf relies on a set of features called a dialect. The standard distribution comes with the Standard dialect and the SpringStandard dialect, which allow us to create so-called natural templates. These can be correctly displayed by the browser even when we access them as static files. Consequently, these documents can be viewed as prototypes. If we need other functionalities besides the predefined ones, Thymeleaf allows us to create our own dialects. A dialect offers functionalities such as expression evaluation or collection iteration. The Thymeleaf core is built upon a DOM processing engine that is the project's own high-performance implementation. With this mechanism's help, it makes an in-memory representation of the templates and then performs some processing based on the current configuration and the provided data set, traversing the nodes.
3 http://www.thymeleaf.org/

Standard Dialect and SpringStandard Dialect
These two dialects have the same syntax, and the main difference between them is the expression language being used. While the Standard dialect uses OGNL (Object-Graph Navigation Language), the SpringStandard dialect has the Spring Expression Language integrated. Also, the SpringStandard dialect contains a series of small adaptations for making better use of certain functionalities offered by Spring. The syntax used by Thymeleaf in its templates can be noticed in the following example:

<a href="#" th:href="@{'http://todaysoftmag.com/tsm/en/'}" th:text="${websiteTitle}">Today Software Magazine</a>

When the above template is processed, Thymeleaf evaluates the expressions given as values to the attributes located within the th namespace and replaces the values of the classic HTML attributes with the processing result. In order to benefit from the document's validation, we need to declare the namespace like this:

<html xmlns:th="http://www.thymeleaf.org">
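Such a template can also be processed outside a web container. The following minimal sketch is our own (the template name, prefix and variable are illustrative); the TemplateEngine, ClassLoaderTemplateResolver and Context classes are part of Thymeleaf 2.x.

import org.thymeleaf.TemplateEngine;
import org.thymeleaf.context.Context;
import org.thymeleaf.templateresolver.ClassLoaderTemplateResolver;

public class ThymeleafDemo {
    public static void main(String[] args) {
        // Resolves templates from the classpath, e.g. src/main/resources/templates/
        ClassLoaderTemplateResolver resolver = new ClassLoaderTemplateResolver();
        resolver.setPrefix("templates/");
        resolver.setSuffix(".html");

        TemplateEngine engine = new TemplateEngine();
        engine.setTemplateResolver(resolver);

        Context context = new Context();
        context.setVariable("websiteTitle", "Today Software Magazine");

        // Evaluates the th:* attributes and returns the resulting markup
        String html = engine.process("link", context);
        System.out.println(html);
    }
}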

Spring MVC Integration
As we mentioned earlier, Thymeleaf offers us a module for Spring MVC integration, available both for Spring 3 and Spring 4. In order to benefit from this module, we added the thymeleaf-spring3 dependency from the org.thymeleaf group to the pom.xml file of our example. The source code of the example project can be downloaded from the following URL: https://springthymeleaf.googlecode.com/svn/trunk.

Advantages and Disadvantages
Thymeleaf is a project that draws in more and more users, as well as programmers who contribute to its development. The official website offers us a variety of resources that help us get familiar with this product. Moreover, the existing documentation teaches us how to extend Thymeleaf with our own dialects. Besides these tutorials, we have at our disposal articles on various subjects, a users' forum and an interactive tutorial. Although at the moment of this writing there are no books about Thymeleaf, we can find many articles about it. We can also access the thymeleaf and thymeleaf-spring Javadoc API documentation. This template processor comes with a philosophy different from Velocity's and FreeMarker's, a thing that can also be noticed in the syntax of the Standard and SpringStandard dialects. Thymeleaf stresses the concept of natural templating, which offers us the possibility to create static prototypes. With the release of Thymeleaf 2.0, the mechanism for processing the templates was replaced, increasing its performance. There is also a system for caching the templates. As far as integration with IDEs is concerned, there is a plug-in for Eclipse that offers auto-completion; unfortunately, there is no support for other IDEs. Similarly to the other projects we talked about, the Spring MVC integration is easy to achieve, since the authors of Thymeleaf have created a special module for this task. The project benefits from frequent releases, the current version being 2.1.2.RELEASE, available to the public since December 2013. Thymeleaf offers a wide range of functionalities, but the tests show that the processing of the templates takes longer than for some of its competitors.

Rythm
Rythm4 is a template engine for Java applications, distributed under the Apache License version 2.0 and described by its author as a general-purpose product, easy to use and super-fast. Similarly to other template processors, we can process with it HTML documents, XML documents, SQL scripts, source code, emails and any other kind of formatted text. Rythm was inspired by the .Net Razor project, due to its simple and elegant syntax. Using the same praising words, the author claims that the number one goal of the project is the user experience. From the API to the templates' syntax, the product is characterized by simplicity. The product was also designed to be easy to use for Java programmers.
4 http://rythmengine.org/

The Syntax of the Template Processor
Rythm uses the special character @ to introduce all syntax elements. This is illustrated in the following block of code:

@for (product: products) {
<li><a href="/product/@product.getId()">@product.getName()</a></li>
}

In order to be able to use Spring MVC model objects within the templates, we have to use the following notation:

@args java.util.List<ro.cdv.model.Product> products

Spring MVC Integration
For integration with Spring MVC we have at our disposal a third-party library, available as a Maven artifact with the spring-webmvc-rythm id and the com.ctlok group. Having this dependency in our project, we can declare the necessary beans in servlet-config.xml. One of the rythmConfigurator bean's properties is mode, which lets us specify which mode we are running the application in: dev or prod. The source code of the example application can be downloaded from the following URL: https://springrythm.googlecode.com/svn/trunk.
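To get a feel for the engine outside Spring, here is a minimal sketch using Rythm's string-rendering entry point; the template text and arguments are ours, while Rythm.render is part of the engine's public API (assuming the org.rythmengine package of the 1.0 line).

import java.util.Arrays;
import java.util.List;
import org.rythmengine.Rythm;

public class RythmDemo {
    public static void main(String[] args) {
        List<String> products = Arrays.asList("TSM no. 23", "TSM no. 22");

        // @args declares the template parameters, just like in the article's example
        String template = "@args java.util.List<String> products\n"
                + "<ul>\n"
                + "@for (product: products) {\n"
                + "<li>@product</li>\n"
                + "}\n"
                + "</ul>";

        // Rythm compiles the template to bytecode and caches it on first use
        System.out.println(Rythm.render(template, products));
    }
}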

Advantages and Disadvantages
As opposed to Velocity and FreeMarker, Rythm processes the templates by transforming them into Java bytecode. Due to this fact, at runtime their processing is very fast, placing this project among the fastest template engines of the Java universe. Although it's not as extensive as the documentation of the other products we presented, the documentation of this project is broad, allowing us to acquire the necessary skills for developing web applications using the Rythm Template Engine. On the project's website there is a series of tutorials both for programmers and webmasters, and with the help of the dedicated Fiddle instance we can write Rythm templates and visualize the result on the fly. Being a relatively young project, the community around it is not yet developed. Rythm is able to operate in two modes: dev (development) and prod (production). Thus, in dev mode the templates are reloaded every time they are modified, to shorten the development time; on the other hand, in prod mode they are loaded only once, to increase performance. One of the project's weak points is the fact that, for the moment, it doesn't offer any plug-in for IDEs. Thereby, although its syntax is as friendly as it can be, we don't have assistance from our preferred IDE for writing the templates, an important aspect for many developers. Rythm allows us to insert Java code inside templates, but this is one of the aspects that determined us to give up JSP in favor of other template processors; thus we see this as a minus of the project.

Performances
There are few tests available that are relevant for determining the performance of the template engines presented in this article, but in the following paragraphs we will show you the results of such studies. Using the benchmarking tool5, we obtained the following results for 10000 requests:
• Velocity: 3.8 seconds
• FreeMarker: 4.8 seconds
• Thymeleaf: 43.2 seconds
• Rythm: 3 seconds

Another test6, where Rythm wasn't included, shows that Velocity and FreeMarker performed almost identically, while Thymeleaf was again among the slowest engines. Thus, Rythm seems to be the fastest template engine among the ones we talked about in this article; Velocity and FreeMarker strive for second and third place, while Thymeleaf obtained the poorest performance.

Conclusions

Although we saw that the projects we presented offer similar functionalities, following this discussion we can draw a few conclusions. Both Velocity and FreeMarker are established products, which have proved their worth in many successful projects, offering decent performance. On the other hand, Thymeleaf and Rythm are young projects that come with a new philosophy, adapted to the current trends in web development. For instance, Thymeleaf excels at natural templating, while Rythm offers a clean syntax, easy to understand both for programmers and webmasters. We can conclude that choosing a template engine depends first of all on the project we need it for, every processor we discussed being worth a try.

Dănuț Chindriș

danut.chindris@elektrobit.com Java Developer @ Elektrobit Automotive

5 https://github.com/greenlaw110/template-engine-benchmarks
6 http://www.slideshare.net/jreijn/comparing-templateenginesjvm


legal


How to protect a good business idea

Many of you have dreamt of being entrepreneurs. And maybe, one day, you had a great business idea. So, you entered a partnership with one or several friends skilled in a specific field and… you set up a startup.

'Entrepreneur' and 'startup' are two fashionable words. They are repeatedly used in almost all relevant conferences and meetings, where it is emphasized that any startup must be planned in detail and in perspective - from concept, business model, financing and development to how to benefit from SEO and social media - all in view of gaining success. But far too seldom do they stress the importance of a strong legal component in the startup's initial strategy - beginning with choosing the proper legal business form, the registration, protection and exploitation of any intellectual property rights, and continuing with specialized consultancy on contracts, etc.

You may not know it, but any business owns (or should own, in order to be viable) a portfolio of at least a few minimal intellectual property assets - a www domain name, a commercial name, a logo, a trademark (maybe even an unregistered one), know-how, trade secrets - which all offer a competitive advantage. Moreover, depending on the type of activity carried out by the startup, intellectual property may entail more sophisticated assets: copyright in computer programs, in mobile apps, in graphics or in video games, legal rights in commercial databases, licenses, patents, etc. As a whole, all these represent one of the most valuable assets of the startup and they should not be neglected; on the contrary, they must be protected - enhancing, thus, their value. Your business gains a higher chance to succeed when its intellectual property portfolio is a valuable one. And its value - both for any investors to whom you present the business idea you wish to implement and for any potential partners - is also given by the manner in which such intellectual property assets are protected. So, one legitimate question arises: what can entrepreneurs do to protect their business idea and their intellectual property portfolio when pitching and pursuing financing from various sources - given that business ideas are not protected by copyright (as one might mistakenly believe)?

Most of the time, in a pitch, people focus on starting discussions in order to establish all the aspects of an eventual partnership (financing from a business angel, a joint venture, etc.). Implicitly, there will be a disclosure of information regarding the business idea in itself, possible key elements of intellectual property that are at the foundation of the business, etc. - information that should remain confidential and should not be used without your prior written consent. In the optimistic scenario, after this "match-making" phase, one might enter a successful "marriage", materialized in a contract that will detail, among others, aspects regarding: project implementation, all financial claims (including those of the investor or of the trading partner) and the exploitation of intellectual property. However, for the pessimistic scenario where such a partnership does not materialize, it is recommendable to take proper steps in order to protect the information you share. You can achieve this, for instance, by signing a Non-Disclosure Agreement (NDA) before beginning the discussions. An NDA can send a positive signal of trust in your own ideas (the fact that you are taking your business and business plan seriously). And it should stipulate - as accurately as possible - which confidential data, information and documents you are going to share, along with all the obligations of the recipient and his contractual liability if a breach occurs, etc. Of course, you should find the rightful middle way between the legitimate wish to protect your business idea and the investor's / partner's (un)willingness to sign an NDA; in practice, there are frequent cases when potential investors or partners refuse to sign an NDA (due to various reasons). Some of them have set a policy of not signing NDAs, and others accept to sign one only when the discussions advance and only if they are interested in entering the partnership or offering the financing. You have to keep in mind that, as attractive as the perspective of collaborating with such a sponsor might be, things can always turn ugly. Therefore, you should take into consideration, especially in the negotiation phase, signing an NDA - even if sometimes this may seem unrealistic from the point of view of the investor or of the partner. And when this principle cannot be put into practice, think about how generous (or not) you should be with the requested information - such information is your most valuable possession and, once disclosed, you might lose your competitive advantage. In this case as well, your ability and that of your legal advisor to tackle this problem with the investor might just be the way out of the deadlock. In a nutshell, a solid business cannot be built without some legal tools that are correctly and efficiently used - when required, with proper help (for instance, for drafting a good NDA). So, do not play around with this issue, as it can cost you just… a business.

Claudia Jelea

claudia.jelea@jlaw.ro ro.linkedin.com/in/claudiajelea Lawyer @ IP Boutique



management

Gogu and the water bottle

Gogu leisurely screwed the cap back on the water bottle, made sure it was well sealed, then screwed it once more, which was completely useless, but it was obvious he was doing all that while his mind was wandering elsewhere. Misu saw he was thoughtful and probably thought it was the best opportunity to pick on him, as it had been long since they'd had a verbal fight and he didn't want to lose touch: "Hoo, brrrr…" he said quite loudly, staring at Gogu. But he didn't react, so Misu repeated it, this time even more loudly: "Hooo, brrr… stop it!" He was disappointed he was completely ignored by Gogu and, somewhat at a loss, he looked for support from his colleagues. He noticed that everyone was looking at him curiously and an idea occurred to him: "So, you've heard me, but you do not deign to grumble out a reply. Isn't that a sign of lack of respect? Why are you doing this, Gogu?" "Three times Hip and once Hooray, that's why," replied Gogu, but he couldn't help smiling. "For the Workers' Day, the 1st of May, or - to be more specific - to give you the chance to use sophisticated words, to twist your tongue… what was it? To deign?!" "Well, I'm glad you're a smart one, and nothing twists in your brain," Misu burst out, but kept calm, without any trace of anger. "But there's nothing to twist," he added more quietly; then, he suddenly asked what he actually wanted to know: "Tell us, where were your thoughts wandering, 'cause you seemed gone… And that is something we cannot afford right now, after Dan's unexpected departure." "Yes, that's what I was thinking about. His departure shatters us a bit; he was the only one who knew all the ins and outs of this new project. And there is nothing documented; all the experience from the previous projects is locked up in his mind… and that's about it. Well, we may find something left in the Inbox/Outbox, but it cannot help us too much. What good is it to gather experience if you cannot share it with the others? I mean, how can it help the organization, 'cause it's obvious it helps you, on the personal level." "What is it that helps or doesn't help, guys? What are you talking about?" As usual, Chief had appeared without anyone noticing him. "Chief, you've got to stop these undercover entrances, without letting us know…" "Well, all right, next time I'll wear some


bells. Tell me, what’s the trouble?” “There is no trouble, Chief, we were just thinking…” “I was thinking, that is”, pointed out Gogu. “You and me, that’s us,” Misu went on, untroubled. “How can you keep the experience and knowledge acquired in the previous projects, within the department, or the company?” “Oh, I see you’ve talked to Dan’s team but didn’t get much.” “We got a little, but not many palpable things. The problem is that each one of them is now involved in other projects, Dan has left, and we have no document to help us. It’s as if we have to start from scratch again.” Gogu grumbled, displeased: “It’s a little annoying. We have all sweated our guts out and we will continue to sweat our guts out. No matter what you’ll say, Chief, this sounds like something stupid…” “It’s not stupid, Gogu; it is a sign we are evolving and we want to do things in a better way. We have encountered a problem, let’s solve it: let’s see what we can do with the acquired knowledge. Of course, they belong to the person who is working and going through certain situations. They are personal experiences which, in time, help the one who acknowledges and understands them – to become better and better, to become more competent. The company also becomes better and better, as it is seen as the sum of individual skills. Along with the growth of the company, it will want – as we do right now – to lock as much of the experience thus gained, so as to no longer depend on each individual, taken separately.” “Let’s do it then,” Gogu jumped to conclusion. “What’s stopping us?” “Hooo, brrr…”, Misu chopped in. “It isn’t that simple, Gogu. Tell me, can you ride a bike?” “Your neurons are riding bikes; none of them has stayed at home. What’s the big idea with the bikes, now?”


"Can you or can't you? Answer me." "Yes, I can." "Well, then, explain in writing how you do that, especially the part where you maintain your balance, and I promise eating the document," concluded Misu, victoriously. "To eat," Gogu corrected him, but meanwhile he had put on his thinking cap. He could, obviously, describe how you get on the bike, how you get off the bike, how you get up after having fallen… "Eating, to eat, whatever. Well, can you explain it or not?" "What, then? Do we just give up?" Gogu was confused, looking towards Chief for support. "Precisely; if things had been simpler, we would have solved them long ago, right? Knowledge and experience are like water: if you do not bottle it, it leaks out. Our problem is to find the bottle in which to collect this water," Chief laughed. "In part, we have solved that: we have the project plans, we have the progress reports, which often help us in the following projects, too. What is now left is to see how we can bottle the bike riding. But this is neither easy to explain, nor are the people willing to do it. After all, this is what differentiates them, and the 'bottling' makes them easier to replace." "Hmm, the thought is not very nice…" Gogu was thinking aloud, but he quickly added: "from the point of view of the individual." "Which means we need to find the compromise solution that helps both the company and the individual. The successful companies have solved that; we've got to solve it, too. As I was saying, Gogu, it is a matter of evolution, not something stupid." "Yes, it would be stupid to leave it like that and find ourselves in the same situation next time…" Simona Bonghez, Ph.D.

simona.bonghez@confucius.ro Speaker, trainer and consultant in project management, Owner of Colors in Projects


