Issue 22 - Today Software Magazine (english)


No. 22 • April 2014 • www.todaysoftmag.ro • www.todaysoftmag.com

TSM

TODAY SOFTWARE MAGAZINE

How to win the game of Test Automation? iOS image caching. Libraries benchmark

Ingenuity, Perseverance and Connectivity BDD, Javascript and Jasmine Startup-ing without money: Startcelerate investment model Inside view of GPS Navigation Getting started with OpenXML AOP using Unity

Perspectives on Object Oriented Design Principles Why does it take you so long to finish a task? The Web’s Scaffolding Tool For Modern Webapps – Yeoman IMAGINE - a study regarding Cluj IT companies Machine learning in the Cloud



6 CCC - A ‘shot’ of programming George Platon

7 Welcome to Techsylvania! Vlad Ciurca

8 Cluj Innovation Days, 20-21 March 2014 Andrei Kelemen

10 Ingenuity, Perseverance and Connectivity Dhyan Or

11 Startup-ing without money: Startcelerate investment model Tudor Bîrlea and Gabriel Dombri

14 How to win the game of Test Automation? Mihai Cristian

18 iOS image caching. Libraries benchmark Bogdan Poplauschi

21 The Web’s Scaffolding Tool For Modern Webapps – Yeoman Răzvan Ciriclia

23 Getting started with OpenXML Florentina Suciu and Gabriel Enache

26 BDD, Javascript and Jasmine Bogdan Cornianu

30 Why does it take you so long to finish a task? Gabriela Filipoiu

33 Requirements Engineering using the Lean methodology Radu Orghidan

36 Perspectives on Object Oriented Design Principles Cătălin Tudor

38 Machine learning in the cloud Roland Szabo

40 Rapid Application Development for the Web with Oracle APEX George Bara

42 Inside view of GPS Navigation iOS Skobbler team

44 IMAGINE - a study regarding Cluj IT companies Dan Ionescu

46 Lumy.ro usability testing Daniela Ferenczi

47 Improving - why bother? Tibor Laszlo

50 AOP using Unity Radu Vunvulea


editorial

Ovidiu Măţan, PMP

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine

At the beginning of April, I took part in “…even mammoths can be Agile”, as a participant and organizer. It was a good opportunity to connect to the pulse of the community and to remember the Agile principles, which are very important when carrying out innovative projects. Recently, a new law was enforced to bring the presence of drones under regulation. Practically, you are not allowed to fly a drone in residential areas, access being legal only in open spaces; filming is not allowed, but there is no restriction on taking photos from the air. I do not wish to comment on the provisions of the law, but as far as small drones are concerned, the ones that weigh less than 1 kg and are aimed at the general public, the provisions of the law are within the accepted limits of good sense. However, unfortunately, in only a few days, what was perceived as a fantastic technology which many people admired and wished to acquire turned into something forbidden. Until not long ago, the first question I would get when I was flying the drone was about its cost; now, the first question is whether it is legal or not. The lack of accurate information and the pleasure of exaggeration have always been the cause of a distorted perception. In the case of drones, one of the effects of this false perception is the limitation of the general public’s curiosity and innovation around this new technical achievement, namely, the drone. And this is happening while we are witnessing a race to see who will succeed in employing it commercially on a large scale, the best known example being Amazon. Beyond the negative consequences, this is a good lesson for those in charge of teams and companies, an urge to better analyze the impact of restrictive rules. Overplaying the incident-prevention role, specific actually to any law, may lead to the daunting of the entrepreneurial spirit. 
Coming back to the Agile conference, I remembered an extremely valuable thing: sometimes, we do not know what it is that we want right from the beginning, and precisely for that reason this methodology allows us to adapt according to the evolution of the project. The main theme of issue 22 is cloud technologies, and this is obvious in the pages of the magazine. We begin with a few impressions from Cluj Innovation Days. Then, we have an invitation for you to the programming contest organized by Catalysts, CCC, and to Techsylvania. We have talked to Vlad Ciurca, the organizer of Techsylvania, who has promised, for the first day of the event, a hackathon where we will be able to program exotic devices such as Google Glass; on the second day, we will be able to attend a series of interesting discussions on entrepreneurship and technology. From Israel, we get acquainted with a Nobel Prize laureate and a great entrepreneur in Ingenuity, Perseverance and Connectivity. Startcelerate promises us, in the future, startups with no implementation troubles for the entrepreneurs and, for the companies, a connection to local innovation and to that from the UK. How to win the game of Test Automation? opens the series of technical articles. It suggests a very interesting hierarchy of levels for those who automate application testing, and it also suggests a work platform. The efficiency of caching images on the iOS platform is approached and demonstrated through a few benchmarks. Enhancing productivity is analyzed in the article on Yeoman. To those who are curious about the content of an Excel xlsx file, I recommend the article called Getting started with OpenXML. BDD, Javascript and Jasmine presents in detail what Behaviour Driven Development is and how to implement it. 
In the management area, we include the articles Why does it take you so long to finish a task?, Requirements Engineering using the Lean methodology and Performance management in project oriented organizations in Romania. You can find an analysis of object oriented programming from the Shannon entropy perspective in Perspectives on Object Oriented Design Principles, and Machine learning in the cloud presents online solutions for automated learning from the perspective of artificial intelligence. We end with IMAGINE – a study of IT in Cluj, a research into the perception of Cluj companies from the students’ point of view.

Ovidiu Măţan

Founder of Today Software Magazine

4

no. 22/April | www.todaysoftmag.com


Authors list

Gabriel Dombri
gabriel@startcelerate.com
Co-founder @ Startcelerate

Dhyan Or
do@socialrehub.info
CEO & Co-founder @ Social ReHub

Andrei Kelemen
andrei.kelemen@clujit.ro
Executive director @ IT Cluster

Roland Szabo
roland.szabo@3pillarglobal.com
Junior Python Developer @ 3 Pillar Global

Tudor Bîrlea
tudor@startcelerate.com
Co-founder @ Startcelerate

Vlad Ciurca
vlad@techsylvania.co
Product Guy. Tech Events Producer. Connector @ Techsylvania

Radu Orghidan
radu.orghidan@isdc.eu
Requirements engineer @ ISDC

Cătălin Tudor
ctudor@ixiacom.com
Principal Software Engineer @ Ixia

Gabriela Filipoiu
gabriela.filipoiu@accenture.com
Software Engineering Analyst @ Accenture

Gabriel Enache
gabriel.enache@fortech.ro
Software engineer @ Fortech

Răzvan Ciriclia
razvan.ciriclia@betfair.com
Software engineer @ Betfair

Mihai Cristian
mihai.cristian@hp.com
Test Automation Engineer @ HP

Claudiu Cosar
claudiu.cosar@3pillarglobal.com
Software engineer @ 3Pillar Global

Bogdan Poplauschi
bogdan.poplauschi@yardi.com
Senior iOS Developer @ Yardi Romania

George Platon
George.Platon@catalysts.cc
Software developer @ Catalyst

Florentina Suciu
florentina.suciu@fortech.ro
Software engineer @ Fortech

George Bara
gbara@sdl.com
Business Consultant @ SDL

Tibor Laszlo
tibor.laszlo@improving-it.com
Partner & Consultant @ Improving-IT


Editorial Staff

Editor-in-chief: Ovidiu Mățan ovidiu.matan@todaysoftmag.com
Editor (startups & interviews): Marius Mornea marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău dan.hadarau@todaysoftmag.com
Copyright/Proofreader: Emilia Toma emilia.toma@todaysoftmag.com
Translator: Roxana Elena roxana.elena@todaysoftmag.com
Reviewer: Tavi Bolog tavi.bolog@todaysoftmag.com
Reviewer: Adrian Lupei adrian.lupei@todaysoftmag.com
Accountant: Delia Coman delia.coman@todaysoftmag.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag
ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.
www.todaysoftmag.ro
www.todaysoftmag.com



event

CCC - A ‘shot’ of programming

Short, intense and ever growing in popularity, CCC (Catalysts Coding Contest) has become a sort of ‘seasonal’ attraction for programming enthusiasts from (but not limited to) Cluj. The Catalysts Coding Contests (http://contest.catalysts.cc) started in 2007 in Austria with a relatively low number of contestants, but the feedback which came after was nothing short of „awesome”! The 18th edition of the contest will be hosted in Romania, Austria and India. CCC has gained renown in Cluj since 2011, when it first set foot on Romanian soil. Why? Mostly because of its unique and innovative structure. Each contestant (or team) has 4 hours to solve one problem consisting of 7 levels. There are no programming language restrictions! Everyone can pick their preferred language or paradigm in which to code. Teams can have a maximum of 3 people. The feedback received for these problems is diverse but converges towards “it was not easy!” The difficulty evidently grows as you progress through the levels, level 7 usually being reached by only a handful of people. The number of participants grew steadily over the years, both in Cluj and in the other locations, reaching a total of 600 contestants at the last CCC, held in October 2013. The participation conditions are simple:
• Teams of a maximum of 3 people, all using the same computer.
• No programming language restrictions.
• No resource restrictions (internet, books etc.).
• Contestants must register both online (before the contest) and on the spot if they choose to take part in the ‘non-online’ version of the contest.

Example problem: The Harvester

At the first level, Dave the farmer is presented as having a field, segmented into equal parts on X lines and Y columns. Each segment has a number associated with it, ranging from 1 to X*Y. The first requirement is to find a route that traverses all the segments in the fastest time possible. One such solution is to go in a serpent-like pattern (see picture below).

At the second level a constraint is imposed: the tractor cannot always leave from the top left (or bottom right) corner! At the third level, the tractor must be able to move both from one column to the next (E → W and back) and from one line to the next (N → S, S → N). The problem grows in complexity as one progresses through the levels, with more tractors kicking in and more and more possibilities for those tractors to work together. The Catalysts Coding Contest has proven to be a great success both for the participants, who gain important insight into solving difficult problems and find new people to share their passion with, and for us, as a company.
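The first-level serpent-like traversal can be sketched in a few lines. This is only an illustrative sketch, not the contest’s reference solution, and it assumes segments are numbered row by row, starting from 1 in the top-left corner:

```python
def serpentine_path(rows, cols):
    """Visit every segment of a rows x cols field in a serpent-like
    pattern: left-to-right on even rows, right-to-left on odd rows.
    Segment numbers are assumed to be assigned row-major, from 1."""
    path = []
    for r in range(rows):
        # segment numbers on this row, in left-to-right order
        line = [r * cols + c + 1 for c in range(cols)]
        if r % 2 == 1:
            line.reverse()  # alternate direction on odd rows
        path.extend(line)
    return path

print(serpentine_path(3, 4))
# [1, 2, 3, 4, 8, 7, 6, 5, 9, 10, 11, 12]
```

Because each segment is entered from an adjacent one, the tractor never retraces its steps, which is what makes this route time-optimal for level one.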

The CCC platform (https://catcoder.catalysts.cc) offers the possibility of online participation, so contestants are not obliged to be physically present at any location; however, this way, they will not be able to receive any prizes, regardless of how well they did. More testimonies & interviews can be found on our media channel.

CCC in India!

After a few months of planning, on the 9th and 15th of March 2014, we were glad to host our first CCC contests in India, in the cities of Kolkata and Kharagpur! The number of contestants was approximately 350, of whom around half chose to participate in teams of two (more details on www.contest.catalysts.cc/en/).


George Platon

George.Platon@catalysts.cc Software developer @ Catalyst


event


Welcome to Techsylvania!

Cluj-Napoca is strategically well positioned in Eastern Europe due to its proximity to Belgrade, Budapest, Sofia, Kiev and Bucharest. Over 4,000 technology and computer science students graduate from Cluj universities each year, which, coupled with the creative arts, caused the Huffington Post to rank it third among cities that are expected to shake up the art world. Cluj-Napoca was selected as the European Youth Capital 2015 and has been declared Europe’s most hospitable city by the European Commission. The local IT scene is well established within the CEE landscape. Initiatives that boost innovation and technological development will firmly establish it as Eastern Europe’s equivalent of Silicon Valley. In this context Techsylvania was naturally created: the biggest tech event in the region and one of the largest in Central and Eastern Europe. The first edition will be held from 31 May to 2 June and has invited the region’s foremost creative minds, technologists and innovators to participate in our event, where inspiration, ideas and knowledge are shared directly between individuals, leading to projects and collaborations. Listen to disruptive technology speakers deliver insights into what the future may actually be like, be inspired by successful entrepreneurs who usually got where they are by making tons of mistakes, have coffee with investors, a beer with marketing professionals, or simply recharge your batteries by socializing with your peers! Techsylvania is a complex event that starts on May 31 with a 24-hour hackathon on wearable and connected devices, focused on attracting the vast pool of talent available in the city, and continues on June 2 with a one-day conference that will bring to the audience some of the most respected professionals in the field of international technology. The Techsylvania Hackathon targets developers who want to build potentially disruptive applications on connected and wearable technologies. 
In addition, the 100+ participants will have the opportunity to test different devices available through the courtesy of the program partners. During the hackathon’s 24 hours, over 100 developers will have the opportunity to work on different types of mobile devices and use connected technology to develop their own applications and products. The participants can come with their teams or they can join another team, aiming to build a functional application during the event. Finally, each team will present its product to the audience to convince the public of its potential. The best projects will be awarded, and the top three teams will receive admission tickets to the conference taking place on June 2 and, moreover, will have a chance to present their products to the entire audience of Techsylvania. The second part of the event is the conference, where more than 200 people are expected and which will bring on stage outstanding speakers from around the world. Technology enthusiasts will have the opportunity to be inspired, to meet potential investors, partners or contributors, and, moreover, to interact directly with people with similar interests and passions. Techsylvania stands out from the landscape of local events because of the quality of the speakers who will take the stage to share their experiences and discuss technology innovation and product development. Among them are: Marcus Segal (ex-COO of Zynga’s Casino division & Entrepreneur in Residence at The Summit, a strategy and operations manager with over 15 years of experience in technology), Paddy Cosgrave (founder of The Summit and F.ounders, described by Bloomberg as a „Davos for geeks” and organizer of the annual meeting of the CEOs of 250 major technology companies worldwide), Jack Levine (founder and CEO of Electric Objects, a startup based in New York City developing a connected screen that brings special digital objects into the homes of users), HP Jin (co-founder and CEO of Telenav, the global leader in location services, car navigation and location-based targeted advertising, the company that acquired Skobbler early this year) or Aryk Grosz (co-founder and CTO of Mixbook Inc., an online photo storage service that has received numerous awards so far and has been mentioned in publications such as the New York Times, USA Today and the Today Show in the U.S.). Although this is the first edition, the event promises to revolutionize the local tech ecosystem and make an important contribution to its development. Organized by Vlad Ciurca and Oana Petrus, Techsylvania is supported by an experienced team and an exceptional board of advisors. The results achieved so far by the people behind the project are a guarantee of its success: the team members have organized 3 successful editions of Startup Weekend Cluj, have developed promising business communities including Romanian Managers Cluj, with over 800 members, and Maramures Business Club, and have initiated educational events for students in #defineCluj. Enthusiasts of technology, innovation and potentially disruptive products should not miss Techsylvania’s first edition, which will take place between 31 May and 2 June in Cluj-Napoca! More details about the program and event invitations are available on the Techsylvania website, http://techsylvania.co, and the first 25 participants receive a special offer: 2 tickets for the price of one. 
Subsequently, the early bird ticket price will be 69 EUR, and registrations can be made online on the event website. Techsylvania addresses all technology innovators and hobbyists who want to talk directly to some of the best professionals in the region, learn and connect with people who can make a difference. If you are one of them, see you at Techsylvania!

Vlad Ciurca

vlad@techsylvania.co Product Guy. Tech Events Producer. Connector @ Techsylvania



event

Cluj Innovation Days, 20-21 March 2014

“I think you chose the best moment to create this platform (Cluj Innovation City) and you will have all my and the European Commission’s support to materialize such value-added ideas. I thank you, and know that you can count on my support.” Mr. Dacian Ciolos, European Commissioner

The second edition of Cluj Innovation Days (CID) was, by all accounts, a success. During the two days, Cluj-Napoca became the capital of innovation in Romania. We have managed, once again, to affirm our city as one of the places where important things are happening. For the readers who are not aware of what CID represents, I will try to summarize in the following paragraphs what we have in mind and how this year’s event went down. Cluj Innovation Days is an international annual event focused on encouraging innovation, research and entrepreneurship as key ingredients towards building sustainability in businesses and community development. Our long-term commitment is to make this event an international landmark for partnership opportunities among academia, business and public authorities, where innovation and technological transfer are trusted cross-sectorial bridges.


This year’s event was organized by the Cluj IT Cluster and aimed to bring together all the above, more exactly researchers and students, government officials, business leaders and entrepreneurs. During the two days of the event, more than 400 participants were involved in conferences and panels dedicated to the role of innovation and entrepreneurship in socio-economic development. The event was graciously hosted by the University of Agricultural Sciences and Veterinary Medicine of Cluj-Napoca, one of the most venerable, yet dynamic higher education institutions in Romania. I will leave it to you to judge whether we have succeeded, by looking at the facts and figures of the conference.

Speakers:
• 51 speakers: 22 Plenary, 12 Mastering Innovation, 8 Fostering Entrepreneurship, 9 Showcasing Innovation
• 13 international speakers
• 2 European Commissioners
• 4 directors and other delegates from the European Commission
• 18 hours of presentations and discussions

Attendance during the 2 days of the conference:
• 200+ participants from business
• 130+ participants from academia
• 40+ public officials
• 40+ students

In terms of content, CID2014 has been very varied, organized on 4 main tracks: Plenary, Mastering Innovation, Fostering Entrepreneurship and Showcasing Innovation. Many of our speakers and participants have expressed their positive feedback in regard to the content and organization of the event. During the first


day of the conference, we had important keynote speakers and dedicated messages, such as from Mr. Dacian Ciolos, European Commissioner at DG Agriculture and Rural Development, Mr. Johannes Hahn, European Commissioner at DG Regional Policy, Mr. Mihnea Costoiu, Delegate Minister for Higher Education in the Romanian Government, and a number of other important officials from state and local governments. The Mastering Innovation track has tried to capture the essence of a unique and very important process: how to generate valuable ideas which can lead to products for the real economy. During the track, speakers and participants debated the key components of the management of innovation and the ways of exploiting its results by generating growth and profit. During the second day of the conference we scheduled two parallel tracks. Under the Fostering Entrepreneurship track we discussed what entrepreneurship and intrapreneurship are, how to run

an innovative startup or how to manage a spin-off, how to make sure that ideas get the needed financing and how to take a business global. The Showcasing Innovation track has tried to motivate the audience with a series of success stories, and the individuals behind those stories shared their experience with us. The media plan we set up for the event generated a lot of coverage in the press. According to our PR agency, there were twice as many media pickups as the average for a conference this size. Media coverage highlights:
• Media activities: 4 press releases / 5 interviews/information requests;
• Press conference: 24 national publications present and more than 30 journalists;
• Publications/TV: 10 offline publications – print / 170+ online publications / 4 TV shows/news;
• Total media coverage value has been estimated at 80,422 euro.

The event had its own website, www.clujinnovationdays.com; from the moment of its launch until right after the event we had 1,857 unique visitors and a total of more than 10,200 page views. I would like to end by thanking all our partners, especially BRD Groupe Société Générale, Microsoft and Huawei. A special note goes to our hosts, the Life Sciences Institute Building at the USAMV Campus. And, of course, we extend the invitation to next year’s edition of Cluj Innovation Days.

Andrei Kelemen

andrei.kelemen@clujit.ro Executive director @ IT Cluster

Life Sciences Institute Building (USAMV Campus)

Plenary - Blue Room - Life Sciences Institute Building

Track – Green Room - Life Sciences Institute Building

Dinner Reception - Casino Building



interview

Ingenuity, Perseverance and Connectivity

Thanks to the recent expansion in flight routes from Cluj, I can now take a 15-minute drive to the local airport, get on a plane to virtually anywhere in Europe or the Middle East, and arrive there that same day in time for a meeting. On my recent visit to Israel I had the chance to meet some remarkable people: a Nobel laureate of great renown and a famed venture capitalist who is also a social activist. I first met Dan Shechtman, who was awarded the Nobel Prize in Chemistry three years ago, at the Israel Institute of Technology. Professor Shechtman has been organizing an open course on entrepreneurship, exposing students to the concept and practice of starting their own business, and inviting local businessmen who started from zero and went on to become leaders in their respective industries to tell their stories. His own story is a curious one: he first discovered what is now known as „quasi-crystals” in 1982, but nobody thought it was serious. Scientists opposed him, from Shechtman’s own research group to the prominent two-time Nobel laureate Linus Pauling, who was quoted referring to Shechtman, saying: „There is no such thing as quasicrystals, only quasi-scientists.” Shechtman kept believing in his discovery regardless of the opposition, and his work is an example of scientific self-confidence and perseverance. Almost thirty years later, he managed to prove his critics wrong and win the Nobel Prize. He has since been constantly on speaking tours across the world, and his schedule is filled for a long time in advance. When I asked him to come and speak in Romania, he could only offer dates in 2016. I met Erel Margalit last month at a startup event. He was talking about his initiatives, as a recently elected member of parliament, to empower less fortunate communities in Israel. 
Margalit is better known through his successful venture capital firm Jerusalem Venture Partners, which managed over $1B in investments, turning them into $17B and boasting some of the most memorable exits in Israel’s startup history, such as QlikTech, XtremIO, CyOptics, Netro Corp, Chromatis, Precise and Cogent. Part of the money he earned from investments in technology, Margalit channels into social and cultural projects in Jerusalem, focusing on rough neighborhoods and poor families. He has also opened a performance club and a startup incubator next to it, stating that these IT professionals need to be connected to musicians and artists from other disciplines in order for them to be creative. Margalit says that Jerusalem should attract young people by offering high tech jobs, startup funding, cultural life and


Margalit says he is passionate about finding where people and communities excel, and building a cluster around a certain industry where they have a chance to compete globally. When he was in his late twenties, he met with the Mayor of Jerusalem and suggested positioning the history-rich but economically poor town as a high-technology center and inviting international players to build their R&D centers in Jerusalem. He is now doing the same for the less known desert city of Be’er Sheva. The idea is to make it a global capital for cyber-security, attracting the likes of Deutsche Telekom, IBM, Lockheed Martin, Oracle and EMC, who will invest, together with JVP and the Ben Gurion University, in Be’er Sheva’s „Cyber Spark”. Be’er Sheva happens to be a sister city of Cluj-Napoca. It is the seventh largest city in Israel, with just about 200,000 residents and more than 20,000 university students, many of them attracted from other parts of the country. For many years, the city was neglected and left out of national development budgets; young people moved to Tel Aviv and never came back, and the population became increasingly older and poorer. But in recent years Be’er Sheva has managed to gain renewed interest, thanks in part to a new dynamic mayor and a vibrant university campus. The city was recently connected to Tel Aviv, which is 113 km away, by a fast train which takes less than an hour, allowing more exchange of ideas and human capital between the center and the periphery. Back in Cluj, I can hardly avoid drawing a comparison between the two sister cities. For one, finding a theme where the IT community in Cluj could innovate and compete globally, attracting international players as partners and investors, would make the city attractive to its residents and to other Romanians and internationals. Secondly, I would love to be able to get from Cluj to Bucharest in a reasonable time, either on a super-fast train, a good highway or a low cost airline. 
Making travel accessible and affordable will allow people to go for meetings in the capital in the morning, perhaps see a concert in the evening and get back home that same night. It will also bring more people and ideas from Bucharest, to the benefit of both cities.

Dhyan Or

do@socialrehub.info CEO & Co-founder @ Social ReHub


startups


Startup-ing without money: Startcelerate investment model

We all know it: startups have become a hot trend, especially in the last 7-8 years, with success stories which amaze us and give us a proper rush. If you are into tech, Silicon Valley is undoubtedly a benchmark and a promised land for entrepreneurs from anywhere. And for good reasons. Cloning, more or less, the same model, places like London, Tel Aviv, Berlin, Tallinn or even Paris have started to become more and more prominent as European epicentres for founding and developing startups. In a typically Transylvanian pacing, Cluj seems to follow the same model.

The Silicon Valley model

With all its swinging innovation, the tech entrepreneurship ecosystem has at its centre a rather linear and hardly challenged framework. There is a whole community dancing around a common, flaming hope: to create something (a product or a business) as quickly as possible, in such a way that an exit would get everyone involved (everyone who took the considerable risk of sponsoring such a venture) a good return on investment. The typical cast of such a variety show would include: the founders, investors at various levels (from angel to VC), incubators and accelerators, and a whole range of service providers, from hosting to legal. The usual flow in such a framework is equally linear, even though many would want it scalable: the investors throw some money at some startups, thus allowing the founders to test some of their hypotheses (be it for building a demo, for product upgrades, or for getting some traction), then the startups rush into a growth race to gather some validation data so that they can go for a next funding round and experiment some more in finding – as Steve Blank elegantly put it – a scalable and profitable business model. Once a round is done, the second starts rather quickly, till the startup has either touched the holy ground of a successful exit or gets lost in the zero-money-in-the-bank neverland. The basic equation of risk is perfectly compensated by the formula of value here: the earlier the external resources enter a startup, the bigger the corresponding value (in terms of equity equivalent) is. Such a mechanism works smoothly at the top of the spectrum (where a bunch of startups bring major benefits to a handful of daring investors and funds), but is practically bankrupt for the rest (the grand majority of startups either slowly die from under-funding or hit hard bottom on their way to finding a working business model). Even so, for this model to work, two conditions have to be fulfilled: 1) enough venture capital, available as early as possible, and 2) a significant number of entrepreneurs eager to take up the risk of plunging into the deep end.

The European Model: A limping clone of Silicon Valley

In all its major lines, the European startup ecosystems implemented the same structure and components, with two major exceptions: venture capital is at a significantly lower level in Europe and, even more important, the risk tolerance that underlies any investment decision is completely different. Let's dig into this a bit. The fundamental difference between the European investors in startups and their West Coast counterparts is that the former will get some money out of their pockets only if the development stage of the startup is not that early and its validation data is rather plentiful. These investors, with few exceptions, don't risk seed money for launching a demo or testing the founders' guesses. They expect that the founders would have swallowed all that initial risk and done that already. Many of these investors have an even healthier habit: they take their wallets out only if there is hard proof that the startup has got a clear ROI formula and even a scheduled date for break-even in their calendars. These settings, wise and sound as they would seem from a risk perspective, make little sense when applied to the startup world and rather work like road blocks than anything else in this race.

And here comes Startcelerate, as a new framework for speeding up the early development of startups, based on a flexible investment model that matches together a few resource areas that haven't been coupled systematically till now.

Startcelerate: forget about money and get your development resources directly

Seven months ago, when we began working on this project, we started from the following idea: how about connecting the startups in need of development directly with strong software companies that have development resources, in such a way that they would collaborate and launch faster some products to test in the market. This way, the software companies will use resources that they don't fully use (and have already paid for) in creating an investment portfolio, and acquire a wild card to get out of the typical scaling issues any such company has. In such a partnership-investment structure, the risk is rather low for the software companies, while for the founders the opportunities they get are way above the incurring costs in those early stages.

Startcelerate solutions

Startcelerate is set up as a platform for alternative investments in startups, where software companies (initially) can use their internal resources to provide solutions for startups in exchange for equity. For startups, this means they can directly access a whole range of development resources, in such a way that all the usual issues of the classical fundraising process can be avoided. As far as an invested resource can be translated into money value, it can become an equity round.

To overcome the shortcomings of the current investment models, Startcelerate has a few solutions:
• Since you have already paid for resources, why not make a proper investment with them? With the whole talent crisis and implicit war on talent in Romanian IT&C, a company rarely has a 100% occupancy rate for its resources. And when it has it, it is probably sold at the smallest possible price, which really raises an eyebrow regarding the opportunity costs. With Startcelerate, any development resource can be turned into an investment; indeed, a risky, teeth-crunching investment as far as accountants are concerned, but on a scalable ROI model, compared to the purely linear revenue model these companies work with now.
• Investors can make informed decisions. Startcelerate will allow you to search, analyse and compare startups based on a formalised profile and according to a series of relevant metrics tied up to the type of startup and its phase of development. The decision-making process for investing gets a new and essential area of specific and actionable data.
• Step-by-step and agile investments. To mitigate risk, the invested resources can be scheduled in small batches tied to some pre-established milestones, where an accomplished objective fires up the next investment. Practically, the startup founders establish an objective, the tactics to get there and the necessary resources for getting there, while the entire progress will be recorded within the startup's account on the platform.
• Effective and flexible investment contracts. On the legal side, Startcelerate proposes an agile and effective contracting flow, using a series of security-type contracts, customisable straight away


within the platform. Such contracts will give the initial investors the option to buy shares in a future equity round, at a pre-established price.

Why Cluj might have an edge with the Startcelerate model

With its technology infrastructure (both the few dozen software companies and the academic environment that provides fresh talent every year) and its relevant outsourcing history, Cluj seems to have a head start in the Startcelerate framework. The local software companies could use Startcelerate either as an investment tool or as a way into an innovation area that is many times difficult to cultivate inside a company whose main focus is to sell development hours to its clients. An increasingly prominent local startup community adds another positive mark here. These are the main reasons for choosing to launch Startcelerate – otherwise a London-based investment startup – in Cluj, in a pilot event in the second part of May 2014. This event will bring together local software companies and Romanian startup founders in an intensive three-day pitch-and-develop event. For more details, visit www.startcelerate.com.

Tudor Bîrlea

tudor@startcelerate.com

Co-founder @ Startcelerate

Gabriel Dombri

gabriel@startcelerate.com
Co-founder @ Startcelerate


communities


IT Communities

At the end of March, Today Software Magazine took part in a couple of events: Cluj Innovation Days and …even mammoths can be Agile, being one of the organizers of the latter. In April there will be a series of events, such as Adam Bien's presence in Cluj, the Project Tango Hackathon in Timișoara and Product Inception in Bucharest.

Transylvania Java User Group
Community dedicated to Java technology
Website: www.transylvania-jug.org
Since: 15.05.2008 / Members: 567 / Events: 43

TSM community
Community built around Today Software Magazine.
Website: www.facebook.com/todaysoftmag
Since: 06.02.2012 / Members: 1321 / Events: 18

Romanian Testing Community
Community dedicated to testers
Website: www.romaniatesting.ro
Since: 10.05.2011 / Members: 730 / Events: 2

Cluj.rb
Community dedicated to Ruby technology
Website: www.meetup.com/cluj-rb
Since: 25.08.2010 / Members: 176 / Events: 40

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile methodology.
Website: www.agileworks.ro
Since: 04.10.2010 / Members: 425 / Events: 63

Cluj Semantic WEB Meetup
Community dedicated to semantic technology.
Website: www.meetup.com/Cluj-Semantic-WEB
Since: 08.05.2010 / Members: 178 / Events: 26

Romanian Association for Better Software
Community dedicated to senior IT people
Website: www.rabs.ro
Since: 10.02.2011 / Members: 238 / Events: 14

Testing camp
Project which wants to bring together as many testers and QA people as possible.
Website: tabaradetestare.ro
Since: 15.01.2012 / Members: 294 / Events: 28

Calendar

April 10 (Cluj) Launch of issue 22 of Today Software Magazine
www.facebook.com/todaysoftmag

April 12 (Cluj) BattleLab Robotica 2014
www.facebook.com/BattleLabRobotica

April 12 (Cluj) Squirrly Hackathon
it-events.ro/events/squirrly-hackathon/

April 14 (Cluj) Transylvania JUG - Adam Bien
transylvania-jug.org/future-meetings/meeting-47-cu-adam-bien

April 14 (Cluj) Startup Lounge - „hub:raum Incubating and Accelerating Innovation”
facebook.com/events/282160681951576/

April 24 (Cluj) Let's meetup and discuss about project management
meetup.com/PMI-Romania-Cluj-Napoca-Project-Management-Meetup-Group/events/173955922/

April 25 (Timișoara) 1st ever Project Tango Hackathon
it-events.ro/events/1st-ever-project-tango-hackathon

May 1 (Cluj) Unplug Cluj-Napoca
facebook.com/events/145391158983499

May 7 (București) Android Testing and Continuous Integration
it-events.ro/events/android-testing-continuous-integration/

May 8 (București) Product Inception
it-events.ro/events/product-inception/

May 15-16 (Cluj) Romanian Testing Conference
www.romaniatesting.ro



testing

How to win the game of Test Automation?

Mihai Cristian

mihai.cristian@hp.com
Test Automation Engineer @ HP

Nowadays, one of the first questions you are asked as a Quality Assurance Engineer is: do you use Automation? Test Automation has definitely been a hot topic in recent years. Everybody is talking about it, trying to use it, complaining about it, and still it remains very much an in-house business, with each company developing its private solution tailored to its own products and needs. This is only natural, as products vary a lot and, especially in the game of Enterprise Software, it is hard to imagine a single Test Automation solution that can be used regardless of the actual product being developed. This being said, working in Test Automation for a couple of years, in which you can see firsthand the obstacles and the rewards of automation, you eventually come to ask yourself the obvious question: How do you win the game of Test Automation? This article looks into the key aspects of developing and implementing a Test Automation Solution. It provides a step-by-step guide based on an existing solution used by HP's Server Automation product, called AXIS. While the focus of the article will be on backend automation and mainly functional testing, the ideas presented can be applied to any Test Automation Solution regardless of the technology behind it.

The Test Automation Solution

So let's get started. As with any game that you want to be good at, it is important to first understand how the game is played. Think of this as the instruction manual for our game.


So what is a Test Automation Solution? A Test Automation Solution, in our vision, is made up of three key elements:
• The Framework
• The Content
• The User Interface

The Framework

A Test Automation Framework is usually defined as a set of assumptions, concepts, and practices that provide support for automated software testing[4]. The Framework is the base of your Test Automation solution. It handles the resource allocation, test execution, option validation and environment management. It is very important to use a framework that is

suitable for the type of automation and the type of product you want to test. Our product facilitates the management of large data centers comprised of servers running different OS flavors, providing all the tools necessary for a system administrator, such as OS provisioning, package management, configuration management or compliance management. So, in order to run automated tests on such a product, we need a framework that can handle a multi-platform environment. The AXIS solution is based on an already existing automation framework called STAF (Software Testing Automation Framework). STAF is an open source framework designed around the idea of reusable components, called services (such as process invocation, resource management, logging, and monitoring)[5]. It allows us to set up the needed prerequisites and check the desired outputs for our tests across all major platforms used today. This is a fortunate example showing that you don't always have to start from scratch when implementing a Test Automation Solution. Automation tools come in many sizes, shapes and flavors and, even when implementing your own solution, they can offer a much needed helping hand. A key aspect you have to take into account when developing your framework is the Discovery process. What this does is simply translate the environment information from the product being tested to the automation framework. Discovery allows you to define a test environment by specifying the test targets and all other necessary information in a format suitable to your application before running the actual tests.
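The Discovery idea above can be sketched in a few lines. This is a hypothetical illustration in Python (the language the AXIS tests are written in), not the actual STAF/AXIS API: the record fields (`host`, `os`, `roles`) and the grouping-by-platform output format are assumptions made for the example.

```python
# Hypothetical sketch of a "Discovery" step: translate raw product
# environment information into a test-environment description that a
# framework could consume. Field names are illustrative only.

def discover(raw_servers):
    """Normalize raw server records into test targets grouped by platform."""
    targets = {}
    for record in raw_servers:
        os_name = record.get("os", "unknown").lower()
        targets.setdefault(os_name, []).append({
            "host": record["host"],
            "roles": record.get("roles", []),
        })
    return targets

env = discover([
    {"host": "srv-01", "os": "Linux", "roles": ["core"]},
    {"host": "srv-02", "os": "Windows"},
    {"host": "srv-03", "os": "Linux"},
])
print(sorted(env))        # ['linux', 'windows']
print(len(env["linux"]))  # 2
```

The point of such a step is that the tests themselves never hard-code hosts or platforms; they only consume the normalized description.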

The Content

The content is represented by the automated tests. A clear separation between the framework and the content can make your TA Solution applicable to more than one product. The tests themselves can be developed in the programming language of your choosing, as long as it is supported by the framework and the product of course. Developing content requires a good understanding of the product being tested so, ideally, QA Engineers should play a significant part in writing automated tests.

The User Interface

Having a framework and automated tests is of no use unless people can use them. One of the most encountered
problems with in-house solutions is that only Automation Engineers know how to run the tests or interpret the results. As with any product that is only tested by the ones who developed it and know its strengths and weaknesses from the start, this approach will eventually lead to quality issues. Usability is one of the key features of any Test Automation Solution. The more people use it, the better it becomes. That is why it is important to have a user interface that allows people to run the tests easily and understand the results. The AXIS solution employs both a Graphical User Interface, based on a web server, that allows users to select the tests they want to run and the test bed on which to run them, and to view the test results in a simple tabular format where they can explore the test logs down to the test case level, as well as a Command Line Interface to back it up.

Playing the Automation Game

OK, so now that we know what the Automation game is about, we can start playing. This walkthrough will give you the necessary information to build your own Test Automation Solution. It is important that you don’t skip levels as you will most likely just end up having to go back and redo them at some point. Game on!

Level 1: Your character

First things first, you need to choose what type of player you will be. Do you want to go towards GUI automation or will you choose backend automation? Or maybe you want to do both. Either way, it is important to know what approach you will be taking from the start. The most important thing at this level is to know the product. Some products may not be suited for backend automation, while others might not have a GUI to automate. GUI test automation often relies on record and playback tools, while backend test automation relies on the product having an exposed API that allows you to call its methods from your test cases. Make sure the choice you make is suitable for the product that is being tested. You can work with the development team to make the product more automation friendly, but this must be done early in the product lifecycle. Another thing you need to consider at this level is what part of the testing process you aim to automate. Do you want to automate only the test execution or also the
result validation? Some Automation solutions focus on the execution part, leaving validation in the hands of QA, while others do both. Going back to our example, we find that AXIS is a good choice for Level 1. It focuses only on backend automation. Tests are written in Python and implement scenarios by calling methods from the product's exposed API, replicating user behavior via scripts. Result validation is also automated, with the tests verifying the method return values against a set of expected values and presenting the results to the user in a simple Passed/Failed/Skipped format. Users can then drill down and check the logs to investigate failures if necessary. Once you have chosen your automation approach and verified that it works for the product being tested, you can move on to the next level.
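The Passed/Failed/Skipped reduction described above can be sketched as follows. This is a toy illustration, not AXIS code: the `FakeApi` class, the method names and the dict of arguments are invented for the example; the real tests call the product's actual exposed API.

```python
# Sketch: a backend test calls a method on the product's exposed API,
# compares the return value against an expected value, and reduces the
# outcome to Passed/Failed/Skipped. All names here are hypothetical.

def run_test(product_api, method, args, expected, prerequisites_ok=True):
    if not prerequisites_ok:
        return "Skipped"          # don't count an unmet prerequisite as a failure
    actual = getattr(product_api, method)(*args)
    return "Passed" if actual == expected else "Failed"

class FakeApi:                    # stand-in for the real product API
    def list_servers(self):
        return ["srv-01", "srv-02"]

api = FakeApi()
print(run_test(api, "list_servers", (), ["srv-01", "srv-02"]))        # Passed
print(run_test(api, "list_servers", (), []))                          # Failed
print(run_test(api, "list_servers", (), [], prerequisites_ok=False))  # Skipped
```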

Level 2: The Backstory

Now that we know how we are going to approach the game, it's time to immerse ourselves in the game world. All worthwhile games have a backstory and, if you want to understand the context in which your quest is taking place, it pays off to read a bit of the game lore before starting to slash everything left and right. So how do we do that? We need to review the existing testing process. Look at the manual test cases to make sure that they are written in a manner that makes them suitable for automation. Work with the QA team to put together a list of prime candidates for automation. The usual suspects will be the regression suites, smoke suites or product deployment tests. Identify key tests that upon failure will result in high priority defects, and define from the start the notion of Passed and Failed tests. One more thing to do at this level is to establish a clear link between the manual and the automated test cases. This will help to determine the automation coverage later on. This level might not sound like a lot of fun but, if you make this effort before you start automating, you will find that your results will be visible much sooner. The AXIS solution uses data files attached to each test that contain the information which links the automated test to the manual one. These are used to upload automated run results to the content management system used by management. Think of this as a way of keeping score. You will definitely want people to know what level your character is and what quests you



have completed. Once you have your list of automation candidates, pass/fail conditions and tests written in a format suitable for automation, you can move on to the next level.
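The manual-to-automated link can be as simple as an id carried by each automated test's data file. A toy sketch of how such a link yields an automation coverage figure (the test ids and field names are made up; the AXIS data-file format is not public here):

```python
# Each automated test records which manual case it covers; automation
# coverage is then the covered fraction of the full manual suite.
# Ids and field names are illustrative.

manual_cases = {"TC-101", "TC-102", "TC-103", "TC-104"}
automated = [
    {"script": "test_login.py", "covers": "TC-101"},
    {"script": "test_deploy.py", "covers": "TC-103"},
]

covered = {t["covers"] for t in automated}
coverage = len(covered & manual_cases) / len(manual_cases)
print(f"automation coverage: {coverage:.0%}")  # automation coverage: 50%
```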

Level 3: Choosing your Class

So you know how the game is played and the backstory. It's time to choose your character class. Choosing a class that is more suited to your approach will make your hero more effective. Do you want to be a melee fighter, a mage or a ranger? Each alternative has its strengths and weaknesses and will influence how you play the game. In this level you must define what you want your Test Automation Solution to do and how you plan to accomplish it. You should define clear requirements as to what your solution will do. This way you will have a clear standard by which you can measure your progress, and you will know when your solution is ready to be deployed. Let's say you want to automate the regression tests for your product. Regression tests are always prime candidates for automation. Having successfully completed levels 1 and 2, you should have an already defined list of tests that cover all areas needed in a regression run. You also know what the pass/fail conditions are and, having reviewed the tests, you know they are written in a format suitable for automation. All that is left is to define your requirements. Create User Stories for your automation effort. Provide estimates and done conditions. It is important to make your effort and your progress visible, so that people know when automation becomes available and what areas it covers. A common mistake at this level is trying to do it all. There is usually a push to automate as many tests as possible. Automating tests blindly helps no one. By defining requirements you will know exactly what you commit to, when you will be done and how your tests will be used. Apply this process to all your automation goals: regression tests, build validation tests, product deployment tests. Once you have requirements defined for your automation tasks, you can proceed to level 4.

Level 4: Grinding

Enough with all the backstory and character creation, it's time to start playing. As any gamer will know, in order to have a truly powerful hero that can take on any quest, you need to do some grinding first. This means going through the easier quests


in order to build up your skill tree, equipment and experience. You need to work your way to the big bosses in any game. There is no quick way to go through level 4. Having successfully completed level 3, you will have a set of clearly defined requirements, or quests, that you need to complete. It’s always a good idea to start with Smoke Tests. These should be the most basic tests so they should be easier to automate and can be used for tasks like build validation. Another good choice is to work on Deployability tests. If you have the ability to deploy the product and run a minimal suite of smoke tests on it, you can already set up a continuous integration process. The main objective on this level is to create value as soon as possible. One tip for this level is to try multiplayer. Experience grows faster when you are in a party. Work with feature teams to ensure the created tests are run and cover all the aspects defined in the requirements. They are the only ones who should decide when your quests are done. The more your tests are run the better they will become. Remember that, as an automation engineer, your job is to implement and provide support for the Test Automation Solution, not to run the tests for other teams and investigate results. Once your quests are done and your party members consider that you are worthy enough, you can advance to the next level.

Level 5: Gear is everything

Having completed level 4, you now have some experience in the game of automation. You have completed some quests and you start to recognize the importance of having good gear. In order for your hero to be truly successful, you need to have the right items and to keep them in good shape. In this level you will also learn to keep different item sets for different quests. So what are your items in this game? Simply put, the only items you have at your disposal are your test suites. Here are a couple of attributes that you should definitely look to improve on in your tests:
• Reviewability: Tests need to be easy to understand and debug. Make sure that they are well documented and that the logs contain all the necessary information to determine the cause of a failed test. Pay attention to tests that pass as well. Just because a test passes on every run does not mean it is a good test. It might not do any useful checks.


If a feature changes, make sure the tests that cover it can be easily reviewed and updated.
• Accuracy: Make sure that when a test passes or fails, it does so correctly. Any test should have setup and teardown interfaces that allow you to enforce prerequisites and cleanup. Implement skip conditions when working on complex tests, so that a prerequisite failure will result in a skipped test, not a failed one. The goal is to have an accurate picture of the product's state, not a run where all tests pass.
• Reliability: A test should have the same behavior every time it is run. Setup and teardown interfaces should ensure a consistent behavior across several runs.
• Independence: Users will want the option to run either a single test (in order to verify a certain defect), a whole suite (to determine the state of a given feature) or a group of suites (to assess the state of the product at a given point). Give the users as many options as possible to ensure that the tests will be used to their maximum potential.
• Reusability: In order to avoid duplicate code and to make your tests easier to both develop and understand, it is a good idea to work on some utilities. Grouping common functions used across a number of tests into utilities will make it easier to add new tests and to determine the cause of test failures. Make sure that your utilities are well documented (parameters, return values) so that they can be used with ease.
It is a good idea to use a versioning system for your tests, just the same as you do for your product. When working with multiple product versions, it's good to have a version of your tests for each version of the product. Certain features may vary across product versions, so a single test might not be applicable. Now that your gear is up to speed, it's time to start making a name for yourself in level 6.
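The Accuracy and Reliability points above hinge on setup/teardown and skip conditions. A minimal sketch, assuming a dict-based test structure invented for the example (these are not the AXIS interfaces):

```python
# Sketch of a harness where a failed prerequisite yields Skipped instead
# of Failed, and teardown always runs so reruns stay consistent.

def execute(test):
    try:
        test["setup"]()
    except Exception:
        return "Skipped"     # prerequisite failure is not a product failure
    try:
        return "Passed" if test["run"]() else "Failed"
    finally:
        test["teardown"]()   # always clean up, even when the run fails

def failing_setup():
    raise RuntimeError("no testbed available")

ok_test = {"setup": lambda: None, "run": lambda: True, "teardown": lambda: None}
no_prereq = {"setup": failing_setup, "run": lambda: True, "teardown": lambda: None}

print(execute(ok_test))    # Passed
print(execute(no_prereq))  # Skipped
```

The design choice worth noting is the `finally` clause: cleanup runs whether the test passes or fails, which is what makes repeated runs behave the same way.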

Level 6: Crafting

Sure, using your top level gear to slash away completing quests is fun, but in order to advance in the game you will have to share your gear with other players. Crafting items for others will help build up a reputation, and having strong party members will make your progress in the game much easier.


Level 6 is about sharing. Make sure your tests are available for other users. Package them so that they can be run from any platform. As we said in the previous level, versioning is good. Make use of your product's versioning implementation. It's safe to assume that you have one in place, so why not use it for the tests as well? Having a new test package with every new build can help you adapt your tests faster, especially in early development stages. Always remember that automation is only good if it is actually used. Providing easy access to the correct tests for all versions is what you need to accomplish at this level. So now you have completed your quests, built up your gear, crafted items for your fellow party members; what else is there to do?

Level 7: Guild Master

This is the final level of the game. You are now a true guild master. Lower level players look up to you for assistance, items and tips. But does the game end here? Just because you are number one does not mean you will remain at this level for a long time. The nature of the automation game is always shifting. You need to be on your feet the whole time if you don't want to be knocked back to lower levels. Always look to improve your Test Automation Solution. Keep your test suites up to date. Add new tests, review and update existing ones and remove tests that are obsolete. Work on the automation framework to improve performance and reliability. Share the knowledge you have gained. Hold trainings to keep QA teams up to date with the state of the automation solution in place. A well designed and implemented
solution will allow QA engineers to develop automated tests themselves, freeing you up to work on enhancements. Provide support for teams that use automated tests by defining clear processes based on logged requests. Changes in the product will always result in new challenges for you, so try to communicate with product management and development teams to ensure that automation is taken into account when designing new features. Following this leveling guide should help you put into place a Test Automation Solution that can be used effectively and that provides useful and visible results. In the end, like in every game, if you study the game, plan ahead, put in the effort at all levels and don't cheat, you will emerge victorious. So enough talk, let's win the game of Test Automation!

References 1. Test Automation Architecture: Planning for Test Automation – Douglas Hoffman, 1999 2. Test Automation Frameworks – Carl J. Neagle, 2000 3. Common Mistakes in Test Automation – Mark Fewster, 2001 4. Wikipedia, The Free Encyclopedia 5. Software Testing Automation Framework (STAF) - http://staf.sourceforge.net/



programming


iOS image caching. Libraries benchmark

In the past years, iOS apps have become more and more visually appealing. Displaying images is a key part of that, which is why most of them use images that need to be downloaded and rendered. Most developers have faced the need to populate table views or collection views with images. Downloading the images is resource consuming (cellular data, battery, CPU etc.); so, in order to minimize this, the caching model was developed.

Bogdan Poplauschi
bogdan.poplauschi@yardi.com
Senior iOS Developer @ Yardi Romania

2. Classical approach

• download the images asynchronously
• process images (scale, remove red eyes, remove borders, …) so they are ready to be displayed
• write them on the flash drive (internal storage unit)
• read from flash drive and display them when needed

    // assuming we have an NSURL *imageUrl and UIImageView *imageView,
    // we need to load the image from the URL and display it in the imageView
    if ([self hasImageDataForURL:imageUrl]) {
        NSData *imageData = [self imageDataForUrl:imageUrl];
        UIImage *image = [UIImage imageWithData:imageData];
        dispatch_async(dispatch_get_main_queue(), ^{
            imageView.image = image;
        });
    } else {
        [self downloadImageFromURL:imageUrl
                    withCompletion:^(NSData *imageData, …) {
            [self storeImageData:imageData …];
            UIImage *image = [UIImage imageWithData:imageData];
            dispatch_async(dispatch_get_main_queue(), ^{
                imageView.image = image;
            });
        }];
    }

To achieve a great user experience, it's important to understand what is going on under the iOS hood when we cache and load images. Also, the benchmarks on the most used image caching open source libraries can be of great help when choosing your solution.

FPS simple math
• 60 FPS is our ideal for any UI update, so the experience is flawless
• 60 FPS => 16.7 ms per frame. This means that if any main-queue operation takes longer than 16.7 ms, the scrolling FPS



will drop, since the CPU will be busy doing something else than rendering UI.
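The arithmetic behind the 16.7 ms figure, spelled out (the 25 ms decode time is an illustrative value, not a measurement):

```python
# At 60 frames per second, the main thread has 1000/60 ms per frame.
frame_budget_ms = 1000 / 60
print(round(frame_budget_ms, 1))    # 16.7

# Any synchronous main-thread work beyond that budget drops frames,
# e.g. a 25 ms image decode:
decode_ms = 25
print(decode_ms > frame_budget_ms)  # True: this frame is dropped
```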

3. Downsides of the classical variant

• loading images or any file from the flash drive is expensive (flash drive access is significantly slower than accessing the RAM) • creating the UIImage instance will result in a compressed version of the image mapped to a memory section. The compressed image is small and cannot be rendered. If loaded from the flash drive, the image is not even loaded into memory. Decompressing an image is also expensive. • setting the image property of the imageView in this case will create a CATransaction that will be committed on the run loop. On the next run loop iteration, the CATransaction involves (depending on the images) creating a copy of any images which have been set as layer contents. Copying images includes: • allocating buffers for file IO and decompression • reading flash drive data into memory • decompressing the image data (the raw bitmap is the result) – high CPU usage • CoreAnimation uses the decompressed data and renders it • improper byte-aligned images are copied by CoreAnimation so that their byte-alignment is fixed and can be rendered. This isn’t stated by Apple docs, but profiling apps with Instruments shows CA::Render::copy_image even when the Core Animation instrument shows no copied images • starting with iOS 7, the JPEG hardware decoder is no longer accessible to 3rd party apps. This means our apps are relying on a software decoder which is significantly slower. This was documented by the FastImageCache team on their Github page and also by Nick Lockwood on a Twitter post.
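To put a rough number on the decompression cost mentioned above: a compressed JPEG may be tiny, but the decoded bitmap occupies width × height × bytes-per-pixel of RAM. A quick back-of-the-envelope calculation (4 bytes per pixel for RGBA is the usual assumption):

```python
# Decoded bitmap size: width * height * bytes_per_pixel.
# 4 bytes/pixel (RGBA) is the common case; sizes here are illustrative.

def decoded_size_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(decoded_size_mb(1920, 1080), 1))  # 7.9
```

So a single full-screen-ish image costs several megabytes once decoded, which is why caching the decompressed version (and purging it under memory pressure) matters.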

4. A strong iOS image cache component must:
• download images asynchronously, so the main queue is used as little as possible
• decompress images on a background queue. This is far from trivial. See details at http://www.cocoanetics.com/2011/10/avoiding-image-decompression-sickness/
• cache images into memory and on flash drive. Caching on flash drive is important because the app might be closed or need to purge the memory because of low memory conditions. In this case, re-loading the images from the flash drive is a lot faster than downloading them. Note: if you use NSCache for the memory cache, this class will purge all its contents when a memory warning is issued. Details about NSCache here: http://nshipster.com/nscache/
• store the decompressed image on flash drive and in memory to avoid redoing the decompression
• use GCD and blocks. This makes the code more performant, easier to read and write. Nowadays, GCD and blocks are a must for async operations
• nice to have: category over UIImageView for trivial integration.
• nice to have: ability to process the image after download and before storing it into the cache.

Advanced imaging on iOS

To find out more about imaging on iOS, how the SDK frameworks work (CoreGraphics, Image IO, CoreAnimation, CoreImage), CPU vs GPU and more, go through this great article by @rsebbe.

Is Core Data a good candidate?

Here is a benchmark of image caching using Core Data versus the file system. The results recommend the file system (as we are already accustomed to).

5. Benchmarks

Just looking at the concepts listed above makes it clear that writing such a component on your own is hard, time consuming and painful. That's why we turn to open source image caching solutions. Most of you have heard of SDWebImage or the new FastImageCache. In order to decide which one fits you best, I've benchmarked them and analyzed how they match our list of requirements.

Libraries tested:
• SDWebImage
• FastImageCache
• AFNetworking
• TMCache
• Haneke
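To make the must-have list from section 4 concrete, here is a toy two-level cache written in Python for brevity (a sketch only: real libraries such as SDWebImage also handle the async download and background decompression parts, which are elided, and the dict-backed "disk" stands in for real flash-drive storage):

```python
# Toy two-level (memory + "disk") cache in front of a download step.
# The lookup order mirrors the article: memory first, then flash drive,
# then network, promoting results upward.

class ImageCache:
    def __init__(self, downloader):
        self.memory = {}          # fastest; purged on memory warnings in practice
        self.disk = {}            # survives restarts; slower than memory
        self.downloader = downloader

    def image_for(self, url):
        if url in self.memory:
            return self.memory[url], "memory"
        if url in self.disk:
            self.memory[url] = self.disk[url]   # promote to memory
            return self.disk[url], "disk"
        data = self.downloader(url)             # slowest path: network
        self.memory[url] = self.disk[url] = data
        return data, "network"

calls = []
cache = ImageCache(lambda url: calls.append(url) or b"<bytes>")
print(cache.image_for("http://e.com/a.png")[1])  # network
print(cache.image_for("http://e.com/a.png")[1])  # memory
cache.memory.clear()                             # simulate a memory warning
print(cache.image_for("http://e.com/a.png")[1])  # disk
print(len(calls))                                # 1 download total
```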




iPhone 4 results

Legend

Note: AFNetworking was added to the comparison since starting with iOS7, due to NSURLCache, AFNetworking benefits of flash drive caching.

Scenario

• async download = support for asynchronous downloads directly into the library • backgr decompr = image decompression executed on a background queue/thread • store decompr = images are stored in their decompressed version • memory/flash drive cache = support for memory/flash drive cache • UIImageView categ = category for UIImageView directly into the library • from memory/flash drive = top results for the average retrieve times from memory/flash drive cache

For each library, I made a clean install of the benchmark app, then started the app, scroll easily while all images are loaded, then scroll back and forth with different intensities (from slow to fast). 6. Conclusions I closed the app to force loading from flash drive cache (where • writing an iOS image caching component from scratch is available), then ran the same scrolling scenario. hard • SDWebImage and AFNetworking are solid projects, Benchmark app – project with many contributors, that are maintained properly. • the demo project source can be found on Github under the FastImageCache is catching up pretty fast. name ImageCachingBenchmark, together with the charts, col• looking at all the data provided above, I think we can all lected data tables and more. agree SDWebImage is the best solution at this time, even if • please note the project from Github had to be modified, as for some projects AFNetworking or FastImageCache might fit well as the image caching libraries, so that we know the cache better. It all depends on the project’s requirements. source of each image loaded. Because I didn’t want to check in the Cocoapods source files (not a good practice) and that the Useful links project code must compile after a clean install of the Cocoapods, https://github.com/rs/SDWebImage the current version of the Github project is slightly different https://github.com/path/FastImageCache from the one I used for the benchmarks. https://github.com/AFNetworking/AFNetworking • if some of you want to rerun the benchmarks, you need https://github.com/tumblr/TMCache to make a similar completionBlock for image loading for all https://github.com/hpique/Haneke libraries, like the default one on SDWebImage that returns the http://bpoplauschi.wordpress.com/2014/03/21/ SDImageCacheType. ios-image-caching-sdwebimage-

Fastest vs slowest device results

Complete benchmark results can be found on the Github project. Since those tables are big, I decided to create charts using the fastest (iPhone 5s) and the slowest device data (iPhone 4). iPhone 5s results Note: disk ~ flash drive (device storage unit)

20

no. 22/April, 2014 | www.todaysoftmag.com

vs-fastimage/ https://github.com/bpoplauschi/ImageCachingBenchmark



programming

THE WEB’S SCAFFOLDING TOOL FOR MODERN WEBAPPS – Yeoman

Initiating a project can, most of the time, be boring when it is no longer a challenge.

YEOMAN – how can it help us?

When starting a new project, in order to enhance productivity and the pleasure of working, Yeoman is based on three tools:

Yo

Răzvan Ciriclia

razvan.ciriclia@betfair.com Software engineer @ Betfair

It helps create the file structure and already specifies general configurations for Grunt and Bower.

Grunt
• Wouldn't you find it interesting to know whether the CSS is valid and will stay that way on a Friday evening, when you are ready to leave the office and you are reading "not working" between the lines of an email from your boss/client?
• Would you like the CSS, JS and HTML to be already optimized at least one day before going into production?
• Have you tested your code without remembering to check the load time, with the dev environment connected to the same network? Have you forgotten that Romania outranks the USA, Germany, Norway, Japan and many other developed countries in international rankings of Internet connection speed? Grunt will help you optimize the size of your images, without detracting from their quality!
• Do you like to structure your work in modules, so that everyone has their own CSS or JS files? Do you like to see files bigger than 100 lines only in production, where it is necessary to have as few loaded resources as possible? Grunt can do this for you by compacting CSS or minifying JS.
• LESS or SASS – each one of us can choose either one or the other – Grunt knows them both.
• The introduction to Grunt is a little short – but some of the qualities of a task runner could only be presented briefly and to the point.
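The tasks listed above can be wired together in a Gruntfile. The sketch below uses real grunt-contrib plugins, but the paths, targets and options are illustrative assumptions, not a configuration taken from this article:

```javascript
// Hypothetical Gruntfile.js sketch: CSS validation, image optimization,
// CSS compaction and JS minification, as described in the text.
// Paths and options are assumptions for illustration only.
module.exports = function (grunt) {
  grunt.initConfig({
    // validate the CSS before that Friday-evening email arrives
    csslint: { all: { src: ['app/styles/**/*.css'] } },
    // optimize image size without detracting from quality
    imagemin: {
      dist: {
        files: [{ expand: true, cwd: 'app/images',
                  src: ['**/*.{png,jpg}'], dest: 'dist/images' }]
      }
    },
    // compact many per-module CSS files into one for production
    cssmin: { dist: { files: { 'dist/styles/main.css': ['app/styles/**/*.css'] } } },
    // minify and concatenate the JS modules
    uglify: { dist: { files: { 'dist/scripts/main.js': ['app/scripts/**/*.js'] } } }
  });

  grunt.loadNpmTasks('grunt-contrib-csslint');
  grunt.loadNpmTasks('grunt-contrib-imagemin');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('build', ['csslint', 'imagemin', 'cssmin', 'uglify']);
};
```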

Bower
It saves time by downloading the libraries necessary for the new project, as well as their dependencies. The correct usage, installation and running of Yo require Node.js and Git to be installed in advance. Also, generator-webapp must be installed via npm (npm install -g generator-webapp).

YEOMAN installation

YEOMAN is installed just like generator-webapp, using npm:

npm install -g yo

PROJECT STARTUP

yo webapp

At this point, jQuery, Gruntfile.js and HTML5 Boilerplate are automatically installed and, besides these, you also have the opportunity to include in the recently started application frameworks such as Bootstrap, Sass or Modernizr. The entire period of time you have to wait in order to get access to the code and to start editing with the project specifications is approximately two minutes.

Practical example:

npm install -g generator-angular

It installs the generator for AngularJS based applications.

yo angular:app imdbApp

It creates the basic structure for the current application, "imdbApp".

yo angular:route movies

It creates a new path in the application, a view and the associated controller. The result of this command is:
• movies.js, the initial version of a controller, in app/scripts/controllers
• movies.js, the initial version of a test, in test/specs/controllers
• movies.html, a template, in app/views
• the movies path added to the basic module app/scripts/app.js
• the automatically generated code for including movies.js in index.html

yo angular:controller movie

It creates movie.js, the initial version of a controller, in app/scripts/controllers, and movie.js, the initial version of a test, in test/specs/controllers.

yo angular:directive sampleDirective

It creates sampleDirective.js, the initial version of a directive, in app/scripts/directives, and sampleDirective.js, the version of a directive test, in test/specs/directives.

yo angular:filter boldRed

It creates boldRed.js, the initial version of a filter, in app/scripts/filters, and boldRed.js, the version of a filter test, in test/specs/filters.

yo angular:service getepisode

It creates getepisode.js, the initial version of a service, in app/scripts/services, and getepisode.js, the version of a service test, in test/specs/services.

yo angular:factory getseasons

It creates getseasons.js, the initial version of a factory, in app/scripts/services, and getseasons.js, the version of a factory test, in test/specs/services.

yo angular:provider getmovies

It creates getmovies.js, the initial version of a provider, in app/scripts/services, and getmovies.js, the version of a provider test, in test/specs/services.

yo angular:view seasons

It creates a view in app/views.

For running the project, from the root of the project, you run:

grunt serve

If the project was cloned from Git, due to the fact that the files from node_modules are added to .gitignore, before running this command you will also need to run:

npm install
bower update

grunt build

It creates the folder of files for production. In this phase, Grunt runs the tasks defined in Gruntfile.js, a file found in the root of the project.

An example of an application developed with Yeoman can be downloaded here: https://github.com/razvancg/yeomanDemo

Conclusions

For someone who has never worked with a code generator before, it may seem difficult to get used to YEOMAN. Given the option of no longer searching for the latest versions of the frameworks we need in the project, downloading them, unzipping and copying them into the project location, besides the automatic generation of the file structure and the tasks we can set up for Grunt, we can say that YEOMAN is "something we have lacked so far"!



programming

Getting started with OpenXML

In this article, we are trying to draw a basic map for programmatically manipulating xlsx files using the Office Open Xml library. Many applications require working with Excel files, either for reading and importing data from them, or for exporting data into reports, so it is important to know how to programmatically manipulate Excel files.

Florentina Suciu

florentina.suciu@fortech.ro Software engineer @ Fortech

Since 2007, Excel files have completely changed their internal structure. Xls was a proprietary binary file format, whereas xlsx is an XML-based format, called Office Open XML (OOXML).

Excel as zip file

An xlsx file is a zip package containing an xml file for each major part of an Excel file (sheets, styles, charts, pivot tables). If you want to check the contents of an xlsx file, all you have to do is change its extension from xlsx to zip and then unarchive it.

Gabriel Enache

gabriel.enache@fortech.ro Software engineer @ Fortech

Excel files components

A spreadsheet document contains a central WorkbookPart and separate parts for each worksheet. To create a valid document, you must put together 5 elements: Workbook, WorksheetPart, Worksheet, Sheet, SheetData. The primary task of the WorkbookPart is to keep track of the worksheets, global settings and the shared components of the Workbook. The document needs to contain at least one Worksheet, defined inside a WorksheetPart. A worksheet has three main sections:
• The Sheet, declared in the Workbook, contains properties such as the name, an id used for sorting the sheets and a relationship id that connects it to the WorksheetPart;
• The SheetData, containing the actual data;
• A part for supporting features such as protection and filtering.

OpenXml library

All the classes needed to manipulate an xlsx file can be found in the Open Xml SDK. Here is a simple example of applying a sum on a data column.

using (SpreadsheetDocument document = SpreadsheetDocument.Create(path,
    SpreadsheetDocumentType.Workbook))
{
    var workbookPart = document.AddWorkbookPart();
    workbookPart.Workbook = new Workbook();
    workbookPart.Workbook.AppendChild(new Sheets());
    var worksheetPart = workbookPart.AddNewPart<WorksheetPart>();
    worksheetPart.Worksheet = new Worksheet();

    // create sheet data
    var sheetData = worksheetPart.Worksheet.AppendChild(new SheetData());

    // create rows and add data to them
    sheetData.AppendChild(new Row(new Cell() { CellValue = new CellValue("5"), DataType = CellValues.Number }));
    sheetData.AppendChild(new Row(new Cell() { CellValue = new CellValue("3"), DataType = CellValues.Number }));
    sheetData.AppendChild(new Row(new Cell() { CellValue = new CellValue("65"), DataType = CellValues.Number }));
    sheetData.AppendChild(new Row(new Cell() { CellFormula = new CellFormula("SUM(A1:A3)"), DataType = CellValues.Number }));

    // save the worksheet
    worksheetPart.Worksheet.Save();

    // create the sheet properties
    document.WorkbookPart.Workbook.Sheets.AppendChild(new Sheet()
    {
        Id = document.WorkbookPart.GetIdOfPart(worksheetPart),
        SheetId = (uint)document.WorkbookPart.Workbook.Sheets.Count() + 1,
        Name = "MyFirstSheet"
    });

    // save the workbook
    document.WorkbookPart.Workbook.Save();
}

Figure 2 - Components of a Spreadsheet Document

Creating a Pivot Table

A pivot table is a table used for data summarization, which can automatically sort, count or average the data stored in a data table. A pivot table needs a source data table; we will assume that we already have the data table, in a sheet called "DataSheet". A pivot table has 4 main parts: WorksheetPart, PivotTablePart, PivotTableCacheDefinitionPart and PivotCacheRecordsPart. Also, we need to instantiate a list of PivotCaches, with one PivotCache child. In the following images, you can see the "map" of a pivot table.



Figure 4 - Components needed for creating a Pivot Table

var pivotWorksheetPart = document.WorkbookPart.AddNewPart<WorksheetPart>();
pivotWorksheetPart.Worksheet = new Worksheet();
var pivotTablePart = pivotWorksheetPart.AddNewPart<PivotTablePart>();
var pivotTableCacheDefinitionPart = pivotTablePart.AddNewPart<PivotTableCacheDefinitionPart>();
document.WorkbookPart.AddPart(pivotTableCacheDefinitionPart);
var pivotTableCacheRecordsPart = pivotTableCacheDefinitionPart.AddNewPart<PivotTableCacheRecordsPart>();


var pivotCaches = new PivotCaches();
pivotCaches.AppendChild(new PivotCache()
{
    CacheId = pivotCacheId,
    Id = document.WorkbookPart.GetIdOfPart(pivotTableCacheDefinitionPart)
});
document.WorkbookPart.Workbook.AppendChild(pivotCaches);

The PivotTablePart describes the layout. Its child, the PivotTableDefinition, stores the location of the table and the PivotFields. There are two kinds of PivotFields: RowFields and DataFields.
• RowFields are static data and their corresponding PivotField has the Axis property set to "AxisRow";
• DataFields are calculated data (like totals) and their corresponding PivotField has the DataField property set to true.
The pivot table definition also needs to know the id of the PivotCache we defined above. In the pivot table definition you can also specify the format in which you want to display the table: Compact (set the Compact flag to true), Outline (set the Outline flag to true), or Tabular (set the GridDropZones flag to true). The PivotTableCacheDefinitionPart, with its child PivotCacheDefinition, defines the cache fields. We need to declare a cache field for each column in the table. It also contains the cache source type (such as SourceValues.Worksheet) and the worksheet source. The PivotCacheRecordsPart only needs to be defined and appended, this part being automatically populated with the cached values of the table.


Conclusion

In this article we drew a basic "map" of how to navigate OpenXML when generating xlsx files. Even when trying to present it as simply as possible, you can see that the code for even the simplest operations can and will get complex.

Applying Conditional Formatting

Now, let's see how to apply some conditional formatting to the data, that is, to format and highlight some cells based on their values. In order to do that, you need to define two things. First, the styles that you want to apply to the highlighted cells, mainly the fonts and colors; the styles are declared in the Stylesheet of the workbook part. Next, the rules, with the help of the ConditionalFormatting object, which has a ConditionalFormattingRule object as a child. Below you can see an example, where we apply conditional formatting to the cells having a value less than 3.



programming

BDD, Javascript and Jasmine

In this article I will try to build upon the concept of Behavior Driven Development (BDD), using the JavaScript testing framework Jasmine. As we already know, JavaScript has come a long way, turning from a simple scripting language for the world wide web into a full stack development language.

Because of this reason, it happens that we sometimes have an undesired migration of the business logic from the back-end to the front-end. This adds a new level of complexity to our client-side layer, so this layer will have more responsibilities. What I want to address here is that once we have more responsibilities on the client side, the project's maintenance costs are directly impacted. It has been estimated that 75% of a project's road-map is spent on maintenance, and 25% on the actual development [1]. Therefore, besides the performance and scalability factors of our application, we should address another one, the maintenance concern. BDD helps us build a decoupled, robust system that easily adapts to future changes.

Different understandings

Practically, if we ask 10 developers to explain BDD, they will come up with 10 different answers. Some might say BDD is just TDD done well, some might say BDD is a second generation of Agile methodology, others might say BDD is an elegant technique for expressing executable specifications, and the list might go on. Because BDD comes as an augmentation of TDD, let's say a few words about the traditional TDD methodology. Programmers who practice TDD have reached the conclusion that the only thing TDD has in common with tests is the keyword "test", and nothing more. It might sound weird right now, but stay on track, because I plan to scatter the fog. The basic steps any developer discovers when doing TDD are the following:
1. In the first step, a developer starts by writing code and then covers the code with some unit tests, using an existing testing framework.
2. In the second step, after some practice, one finds the benefits of writing test-first code, and gains insight into some Testing Patterns, like Arrange Act Assert (AAA) and Test Doubles [2] (Mocks, Stubs, Spies).
3. Now the developer finds that TDD can be used as a design technique for building neat abstractions and decoupled code.
4. In the last step, he discovers that TDD has nothing in common with writing automated tests.

I am about to describe the last point in detail. Sadly, most developers cannot step over the second point.

Dan North (the founder of BDD) described BDD as a methodology that helps us implement the application by describing its behavior from the perspective of its stakeholders. This short and concise description can be divided into more sections, which I am about to describe.

The "loop" of BDD

1. Write a failing "Acceptance Test" that acts as the high-level specification.
2. Write a failing Unit Test (usually identified by the "RED" step in the loop).
3. Make the failing unit test pass (via the simplest possible solution, returning a "duplicated" constant if that suits your needs; usually identified by the "GREEN" step within the loop).
4. Refactor (to remove duplication of knowledge; incremental design is part of this step).

BDD inherits from TDD the "pull-based" rule

Over time, probably any of us has been constrained by the downsides of the push-based methodology. In short, "push-based" describes those primitive times when the manager spread tasks over a bunch of developers, saying: "You should finish this task by the end of the week". We couldn't unleash our skills or improve ourselves in a specific domain, because we were constrained to solve those handed tasks, therefore we couldn't gain any new experience with other technologies. In BDD we have a backlog, a queue into which tasks are pushed. It is something like the Producer-Consumer Pattern, where every developer acts as a consumer, consuming/resolving pulled tasks, and the stakeholder acts as the producer, pushing new features onto the stack. This approach improves the communication between the developer and the stakeholder, because any incoming requirement is a high-level feature, exposed from the stakeholder's point of view, deprived of any technical detail. Afterwards, the developer divides this feature into sub-tasks, which are then prioritized by business value. This approach eliminates the possibility of implementing something that is not part of the stakeholder's requirement, and at the same time an ubiquitous language is born.

BDD is TDD at the Core

As known, writing test-first code we ask ourselves, from the beginning, what exactly we need to make this failing test pass. By writing tests first, we will write just enough code to make our failing test pass, nothing more. In this context, by doing TDD we avoid doing Big Design Up Front (BDUF). We will be the first users of our API. As Kent Beck said in his famous book "TDD By Example" [3], we should place our thoughts into how we would like to interact with our API, and we start writing the test based



on this assertion. Writing the production code first, and then the covering tests, most of the time we will not focus on designing our production code to be testable from the start. Following this course we will end up coding something fragile, tightly coupled, with a lot of dependencies, hence immobile and not reusable. If the design is not testable, that means it is hard to verify, and we end up with untestable, therefore unreliable, code. How many times has it happened that we change something in this so-called "isolated" part and break other dependent components? I must say it has happened to me... In the end, tests should be considered a safety net under any serious refactoring and any change in business logic. It is important to notice that we should test the business logic, that is, the behavior, and not trivial operations such as getters and setters, or other third party APIs which come bundled with their own tests.
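As a quick illustration of "test the behavior, not the accessors", consider this sketch (the ShoppingCart and its prices are hypothetical, not taken from this article's code): the check worth writing targets the business rule computed by total(), not the trivial items accessor.

```javascript
// Hypothetical example: the behavior worth specifying is the total,
// not the getter that merely returns the internal items array.
function ShoppingCart() { this.items = []; }
ShoppingCart.prototype.add = function (item) { this.items.push(item); };
ShoppingCart.prototype.total = function () {
  return this.items.reduce(function (sum, it) { return sum + it.price; }, 0);
};

var cart = new ShoppingCart();
cart.add({ name: 'book', price: 30 });
cart.add({ name: 'pen', price: 5 });

// The business rule: the total reflects all added items.
console.log(cart.total()); // 35
```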

Higher expressivity

Now, the difference between TDD and BDD is that TDD tells us we have a test involved, while BDD tells us we have a more meaningful word: behavior. The first thought is that a test is supposed to assert something that is true or false, while a behavior describes a more meaningful interpretation of our domain logic. In BDD, the form of the test is replaced by a specification. What both TDD and BDD share in common is that we end up having executable specifications, which will serve in the future as living documentation. I want to explain a little bit about the words "living documentation". Most surely, it has happened to us to have a static specification (a high-level specification), in the form of a document or user story, which describes the features of our application. Now, this specification combined with the developer's specifications exposed through testable scenarios violates the Don't Repeat Yourself (DRY) principle in its pure sense, because there is duplicated knowledge in there. When something changes, the application behavior needs to be updated, while the text documentation might not change, becoming deprecated. We should keep them in sync somehow, and this introduces a new overhead in development. BDD focuses on transporting this high-level specification into testable code, which proves more useful by verifying the sanity of our application. This executable specification goes along with our production code.

Outside In Development

If there is an Outside In form of development, intuition tells us there should also be an Inside Out form, which I think all of us have followed at some time. Let's start with the latter. Doing Inside Out, we start by implementing some operations or functions which we consider a core part of the requirement, and we build upon them, adding others. It is easy to build a premature abstraction, which is simply infected and wrong. Inside Out makes one thing, and it makes it well. As the business logic becomes more complex, it will be hard to find the right path that will guide us to implement the complex feature.

Often this form of development pushes us to develop code that will not be reusable, and will become obsolete. This leads to wasted time and money, developing something that ends up not being used. Extreme Programming turned this into a principle of its own, called You Ain't Gonna Need It (YAGNI) [4]. On the other hand, the Outside In form of development shares nothing with the former method. In Outside In, we start coding from a high-level spec, which happens to have business value. Starting with a high-level specification, we code our way in, breaking the spec into more cohesive specifications, hence implementing just as much as is needed, and nothing more. Coding in this form, we end up forced to behave as if we already have some operations which are not fully implemented. This can be one of the downsides of this development form, because we will not be able to run the tests against the functionality until all the implementation is finished. On one side, this beats the purpose of BDD/TDD, which states that we should run the tests as often as possible, so that we can catch bugs early. In the same context, implementing the full functionality in one step is not considered to be quite a baby-step. The scope is to derive low-level specs from high-level specifications. We can name the low-level specifications the inner circle, while the high-level specifications loop inside the outer circle. The principle at the heart of this methodology is "divide et impera" [5]. It is far easier to solve a complex problem by breaking it into a list of small and cohesive


problems. These small solutions can compose the final result for our initial complex problem.

Implementing in a BDD manner, we might map this process as a "One-To-Many" relationship. The "One" is the current subject under test (SUT), while the "Many" stands for its dependents, the SUT's collaborators. The SUT should behave as if it already has its dependents in place, and not bother knowing about their implementation details. Even if its collaborators are not fully implemented, this postponing will help gain a better knowledge of the SUT's domain. Relying upon abstractions is always a good technique.

Organizing Code in BDD

The structure of our executable specifications can be organized by:
1. Feature
2. Fixture
3. Operation/Method

By Feature means the specifications are grouped by a topic of the requirement. One theoretical scenario might be to calculate the total items in a shopping cart (e.g. for an online shop). In this feature we might have several routines that communicate, sending messages to each other:

var shoppingCart = getShoppingCart(user)
var totalAmount = ItemCalculator.calculateTotal(shoppingCart)
var totalAmountWithVAT = Taxes.applyVAT(totalAmount)

We can clearly see that we have some operations that send messages to each other, helping calculate the total price of the items in the cart. This can be mapped into specifications as one feature. Grouping specifications by feature easily yields spaghetti code when the feature changes over time and needs to be updated, or when new functionalities augment the initial feature. The scope is to keep our specifications as clean as possible. Specifications should have the same value as the production code.

On the other hand, structuring specifications by Fixture means we end up with several "execution contexts". We can have one execution context when the shopping cart is empty, and another when the cart has reached the maximum allowed amount. This approach of structuring the specifications drives a clean and elegant design, with the operations grouped by the context in which they are executed.

The last one stands for organizing the specifications by Method. This approach is tedious and can easily drive, as well, to spaghetti code. Why is that? Because given the operations "foo()" and "bar()", we have some tests written for "foo()". Internally, "foo()" might use "bar()", for which we already have some verifications done. Therefore our specifications will seamlessly become redundant, other programmers might not take our "bullet-proof" tests seriously, and our tests would become deprecated and obsolete in time.

BDD reveals the intention through a well-known business language, which is a simple DSL called GHERKIN [6]. This business language appeared for the first time in 2008 in another behavior-driven framework, called Cucumber [7]. It's a business language because it does not target programmers specifically, but can be easily interpreted and understood by a non-programmer.

As we might know, TDD has its own structuring pattern for writing tests, also known as the Arrange Act Assert Pattern. These instructions help us structure the code within the body of the test, making it more readable and maintainable. Still, structuring our code does not imply that domain experts will understand our code.

On the other hand, GHERKIN improves the communication between the stakeholder and the developer, because it helps build an ubiquitous language between both worlds. GHERKIN reveals a cleaner way of expressing the specification. The transported keywords are:
1. GIVEN( an execution context ),
2. WHEN( this operation is invoked ),
3. THEN( we should get this expected result ).

We can map this into a real requirement very easily, which can be interpreted by a non-programmer as well: "GIVEN the employees from a department, WHEN payday comes and computeSalary() is invoked, THEN I want to receive a salary report for all employees". This information seems more useful than having a test asserting that something is true or false. It increases expressiveness by using plain natural language to declare a real requirement. This is the business value that GHERKIN provides.

Let's reiterate some of the advantages of BDD over traditional TDD:
1. Improved communication between the developer and the stakeholder.
2. Takes the developer's mindset closer to the business value, forcing him to think in behaviors.
3. Provides a more understandable pattern that acts as a business language.
4. It is easier for a non-programmer to understand natural language than to read long method names.
5. It is easier to understand the behavior of the application by reading the specifications.

The second part of my article has as its main actor the Jasmine [8] testing framework. The initial scope was to build an EventBus library, developed following the BDD methodology, using Jasmine as the "testing" framework; however, the article has grown quite a bit, therefore I prefer to describe some core parts of it and let you check the code on Github [9].

A few words about Jasmine. Jasmine is a unit testing framework, and not an integration framework, as some might think. It allows structuring the code by Fixture and allows nested describe() blocks. It also provides some commonly used routines for setting up and tearing down the execution context. It uses spies as the Test Double Pattern, and can be easily integrated in a continuous integration environment.

describe("EventBus – SUT", function () {
  // various fixtures
  it("should do this and that – describe behavior", function() {
    // some executions
    expect(thisResult).toBeTruthy();
  });
});

Our Subject Under Test (SUT) here is the EventBus. As the name suggests, it is a data-bus which is responsible for managing events and listeners.

I will not start another discussion about its benefits, but I prefer to say it drives a loose communication between collaborating objects. Its ancestor is the well-known Observer Design Pattern.

Instead of directly managing the dependencies, we delegate this responsibility to the EventBus, which knows about the events and the handlers. The good part is that none of the objects

28

no. 22/April, 2014 | www.todaysoftmag.com


triggering the events will know anything about their handlers. Events are simply published on the EventBus, and they may or may not have corresponding registered handlers. When the EventBus fires an event, it is handled by the handlers registered to listen for that particular event. As we know, jQuery lets us manage events at the DOM level; the EventBus, however, lets us manage application-level behavioral events.
A practical scenario where the EventBus proves useful is the by now well-known online shop. When the user clicks the buy button, we might want many operations to be triggered behind the scenes: for example, a shopping cart changes its list of items, a pop-up is displayed, and the processed item disappears from the initial list of items. The implementation might imply registering three events, one for each of the required actions. One or several listeners can be registered to listen for these events. When the buy button is clicked, the EventBus fires these events, delegating the handling responsibility to the registered listeners. However, for the sake of simplicity, I prefer a shorter example and will verify that some events are indeed handled by a registered listener. The following snippet of code is straightforward:

describe("EventBus", function () {
    var openConnectionEvent, sendStreamEvent;

    beforeEach(function () {
        openConnectionEvent = "openConnectionEvent";
        sendStreamEvent = "sendStreamEvent";
    });

    describe("having a collection of event/listeners - fixture", function () {
        beforeEach(function () {
            EventBus.registerEvent(sendStreamEvent, openConnectionListener);
            EventBus.registerEvent(sendStreamEvent, sendStreamListener);
        });

        afterEach(function () {
            EventBus.cleanBus();
        });

        describe("#fireEvents - operation", function () {
            it("should trigger all registered listeners to handle the operation", function () {
                spyOn(console, 'log').andCallThrough();
                EventBus.fireEvent(sendStreamEvent);
                expect(console.log).toHaveBeenCalled();
                // ... other related expectations
            });
        });
    });
});

This specification pretty much shows what Jasmine is capable of. It provides the setup and teardown mechanisms for "arranging" the context, and some powerful spies as the Test Double pattern implementation. Spies let us inspect which methods have been called and, optionally, with which arguments. We can also instruct the calls to return specific results, and we can check whether a method was invoked via the real implementation; this is the purpose of the andCallThrough() method. Integrating the specifications in a Continuous Integration environment is a trivial task. Following the "describe" blocks, we can easily understand the behavior the feature reflects.

Summary

BDD comes as an augmentation of traditional TDD. BDD drives the application design towards a more loosely coupled architecture, while the refactoring phase introduces the incremental design. The Outside-In form of development is at the heart of BDD. It states that we should start from a high-level specification and go down the road splitting it into more cohesive units. This way we ensure that only what is in the requirements gets implemented, and we avoid a premature BDUF. Outside-In introduces a partial constraint, because we will not be able to run the high-level specification until all the inner circle is fully implemented. Some frameworks (Jasmine among them) provide a mechanism to cross a specification out, allowing us to bypass the high-level specification until all the low-level specifications are implemented. At the same time, it forces us to depend and think upon abstractions, which is a good approach in an OOD world. That way we postpone the real implementation until we have enough insight into the business problem. The good part is that we can do BDD in a dynamic language like JavaScript as well, starting from the low-level specifications. Probably, if we do TDD well, we are already doing BDD.

[1] http://www.clarityincode.com/software-maintenance
[2] Test Doubles: http://xunitpatterns.com/
[3] Kent Beck, Test-driven Development: By Example, The Addison-Wesley Signature Series
[4] http://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it
[5] http://ro.wikipedia.org/wiki/Divide_et_impera
[6] https://github.com/cucumber/cucumber/wiki/Gherkin
[7] https://github.com/cucumber/cucumber/wiki
[8] http://jasmine.github.io/
[9] https://github.com/cclaudiu81/EventBus
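As a reference point for the snippets above, a minimal EventBus consistent with the API they exercise (registerEvent, fireEvent, cleanBus) could be sketched as follows. This is a hypothetical illustration only; the actual implementation used by the article lives in [9].

```javascript
// Hypothetical minimal EventBus consistent with the specification above;
// the real implementation referenced by the article is in [9].
var EventBus = (function () {
    var registry = {};   // event name -> list of listener functions
    return {
        registerEvent: function (eventName, listener) {
            (registry[eventName] = registry[eventName] || []).push(listener);
        },
        fireEvent: function (eventName) {
            // any extra arguments are forwarded to the listeners
            var args = Array.prototype.slice.call(arguments, 1);
            (registry[eventName] || []).forEach(function (listener) {
                listener.apply(null, args);
            });
        },
        cleanBus: function () {
            registry = {};
        }
    };
})();
```

Publishers and subscribers share only the event name, which is what keeps the collaborating objects loosely coupled.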

Claudiu Cosar

claudiu.cosar@3pillarglobal.com Software engineer @ 3Pillar Global



management

Why does it take you so long to finish a task?

Solving a task or reading an article from a magazine usually takes no more than a few minutes. Meanwhile, the chances of being disturbed are very high: checking your phone, your email or your Facebook notifications. Additionally, if you are sitting in your office, your colleagues' discussions about cars, football or fashion might capture your attention. Even if you are trying to keep your focus, you cannot refuse the colleague who kindly asks you for help. And this is how your focus is definitely lost. With all of this happening, your boss is still wondering why your task isn't finished yet.

Cruel reality

Lately, finishing a task takes more and more time because of the surrounding distractions. It is said that while you are at the office you might get interrupted every 3 minutes, either by humans or by your high-tech devices. Once interrupted, it might take you up to 20 minutes to get back to your focused state. That's the moment when you realize you were distracted and you try to compensate by working faster, but this comes with a price: more stress, frustration, working under pressure and a lot more effort.
Distractions in the office have always existed. Digital evolution had a huge impact on our productivity, but nowadays it seems that technology has one main goal: to ruin our focus. Because of smart devices, the large number of web applications, eCommerce and the latest technologies, it is very hard to keep our focus only on our tasks. Often, we start our day with a to-do list and in the evening we find out that we still have some unresolved issues. So, whatever those tasks


on your to-do list are about, it is good to keep them as short as possible in order to have more things accomplished at the end of the day.

Productivity

But what does productivity actually mean? We could measure it exactly, by the number of units finished within a certain amount of time, for example the lines of code written per hour. However, this computation does not reflect real productivity, which should be measured against results, something like: our customers have received their product, our business is growing, we have learnt something new today, and so on. Moreover, productivity is also about the way in which we organize our time and ourselves so that we accomplish more in less time. In other words, we have to maximize the work done while minimizing the effort spent.

Self-discipline

To be truly productive, a lot of discipline is required. But what if your brain is


the one who isn't disciplined? Most of the time, we use our brain to retrieve information we already know, because we have learned it, read it or heard it somewhere. The hardest part comes when we have to solve a problem and are forced to think: this may seem easy at first, but it is very hard for the brain. Then our brain pushes us to take a small break and look some other way, hoping that, when we return, the problem will have solved itself. Obviously, this won't happen, because nobody else is going to solve our problems. And so we find ourselves checking emails and phone notifications, reading blogs and articles, and this is how we get lost in the pool of the latest technologies, avoiding the one thing we were created for: to think. So we end up in the so-called multitasking. It is the number one enemy of productivity, and there are plenty of reasons why. One of them is pretty clear: our brain has a limited capacity for attention, and it should be focused on only one task at a time. You cannot have maximum


concentration on a task if you are trying to answer your emails or talk to your boss at the same time.
We can avoid multi-tasking by using flow psychology. A state of flow is that particular moment, which we have all had at least once, when you feel absorbed by the problem, by your task, and you feel that nothing and nobody can disturb you. We should get the most out of those moments. The least pleasant part is that those flow states cannot be planned ahead, and a night with less sleep can diminish the chances of achieving your flow. The state of flow is also influenced by the perceived challenges of the task on one hand and the perceived skills of each individual on the other. But let's not get deep into psychology; let's just say that we have to make use of those moments, moments in which we feel creative and more productive.
This article presents activities and situations of day to day life, pointing out problems but also improvements that can be made to increase productivity. Given that every human is different and has his or her own way of working, the ideas below might not be useful for everybody.

25h/day

As I said before, productivity also refers to the way in which we organize our day, our work and our thoughts. There are a lot of articles about the most useful habits that successful people follow to increase their productivity day by day. Reading those articles, you end up thinking that for some people the day has more than 24 hours, and you start wondering how others can achieve so many things while you are still struggling with the little time you have per day. Well, there are some tricks that they use and claim have improved their way of living. Most of them wake up early and use the morning hours to deal with the important problems, before other people's activities interfere with their priorities. Moreover, it is said that in the morning people usually tend to be more optimistic and more open to new challenges. Also, morning hours can be used for doing sports or planning the rest of the day. So, early mornings might be the key to a productive day.

Todo or NotTodo

A useful tip that a lot of people seem to use is the todo list. This list does not have to follow a certain pattern; it might contain tasks that need immediate solving, things that you should remember later (for example: answer an email by a certain date) or things that you would like to do, but which are not urgent. Even if some people do not consider it important, the todo list has a major role in our life: we have a lot of thoughts, pressures and ideas that our brain is forced to remember. The fact that there is a list where these things are written down is a win both for us and for our brain.
In the particular case of programmers, the todo list usually contains tasks or goals that we want to achieve as soon as possible. Defining a goal before starting the actual work will take us to better results in a more efficient way. Sometimes we throw ourselves into problem solving without knowing what we really want to achieve, and along the way we notice that we have slightly deviated from our first idea, or we figure out that we want something different. This is why it is very important to clearly define your goals and specifically break down your daily tasks.
On the other hand, some journalists suggest that a list of NotTodos might be even more useful for time management and productivity than the todo list. By summarizing our daily activities, we can determine which ones are worth it or not, based on the time and effort spent, and come up with our own NotTodo list. Finally, everyone is free to decide whether they want to use a list or not and, if yes, which one is the most suitable for them.

Time management

Another term used a lot in productivity-related articles is the timebox. Timeboxing is a time management technique that is used more and more, not only in programmers' way of working, but also in other areas. This technique is very useful because it allows you to define slots of time for your activities in such a way that you can work on different tasks on the same day. In this way you are protected from multi-tasking, because you know you have reserved some time for the other activities. A specific case that emphasizes timeboxing is starting a new task: it is much easier to start working knowing that you have 2 hours allocated for that task than thinking that you have infinite time and never knowing when the task is going to finish. If you choose the second option, after two days of working on the same task you would definitely want to switch to something else. Also, by setting a slot of time you can manage your work better: you won't force yourself to "write a whole novel" in those 2 hours, you'll only write a draft of your first chapter.
One example of timeboxing is the Pomodoro technique. It was invented by the Italian Francesco Cirillo in the '80s, because he felt frustrated about not being able to maintain his focus for a longer period of time. The technique is very simple: you pick a task and, using a kitchen timer, you work on that task for 25 minutes. Then a 5-minute break follows. This is called a "pomodoro". You may perform multiple pomodoros consecutively [2].

Best practices

Not only, but especially in IT, we hear a lot about best practices. The name is pretty much obvious, but each domain or programming language has its own rules or practices. As Will Durant says in The Story of Philosophy (even if some journalists pretend this is Aristotle's saying): "We are what we repeatedly do. Excellence, then, is not an act but a habit". So, the pool of best practices and lessons learned grows over time. Therefore, establishing new habits and good practices will lead us to a better quality of our work.
A well-known way of establishing new habits is the Seinfeld Calendar. This method was created by comedian Jerry Seinfeld at the beginning of his career, when he had to write a lot in order to enrich his jokes vocabulary. His technique is very simple: you pick a task or an activity and you mark in a calendar each day in which that task was performed. The key is to fulfill your task every day; as the days go by, you will form a chain in the calendar. Do not break the chain! By going on with your activity, you'll get used to it and it will become a habit. This technique works in every domain or area, because daily, repetitive actions lead to the establishment of new habits. Also, this gamification brings competitiveness, achievement and fulfillment into our daily life.

Decisions?

A new concept, based on the way we make decisions, was introduced by New York Times journalists as "decision fatigue" [3]. The main idea is that our decisions



tend to get worse as the day goes by. With each and every decision made during the day, our brain gets exhausted, and by the end of the day it finds it hard to make good decisions. So, in the evenings, when our energy level is low, we are likely to act by chance; a better way would be to avoid making any decision at all. Usually, we don't realize this mental fatigue; we notice it only after a bad decision has been made. This is why we should schedule our decisions for the morning hours. A great example that a lot of successful people embrace, probably for other reasons too, not only easing their decisions, is adopting a wardrobe as simple as possible. For example, Mark Zuckerberg has a lot of grey t-shirts, actually the same t-shirt in multiple copies; Barack Obama has only two types of suits, grey and blue, and he alternates them; Steve Jobs also wore the same clothes every day. This is how they eliminate even the minimal effort of deciding what to wear each morning, a decision which might take up to several minutes for some of us. Decisions like what to eat today, or whether to go to the gym or to the swimming pool, might also eat up our energy without us even noticing.
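The Pomodoro cadence described earlier (25 minutes of work followed by a 5-minute break) can be sketched as a small schedule generator. The function name and data shape below are hypothetical, used only to make the alternation concrete:

```javascript
// A hypothetical sketch of the Pomodoro cadence: alternate fixed work and
// break slots instead of open-ended effort on a single task.
function pomodoroSchedule(pomodoros, workMinutes, breakMinutes) {
    var plan = [];
    for (var i = 1; i <= pomodoros; i++) {
        plan.push({ pomodoro: i, phase: "work", minutes: workMinutes });
        plan.push({ pomodoro: i, phase: "break", minutes: breakMinutes });
    }
    return plan;
}

// Four classic pomodoros: 4 x (25 + 5) = 120 minutes of planned time.
var afternoon = pomodoroSchedule(4, 25, 5);
```

The point of the timebox is visible in the data: every slot has a fixed length, so a task can never silently expand to fill the whole day.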

80/20

Knowing that "life isn't fair", the Pareto law, or the 80/20 rule, states that many things in our lives are not equally distributed. The law says that 20% of the input creates 80% of the results. If we switch the parts, it reads: 80% of the input creates 20% of the results. The second version may sound more productive, because it is about more input and more effort; that is why some people tend to follow it and invest time and effort in something that is not even necessary [4]. Even if we would prefer our life to look like the red line in the plot, most things follow an unequal distribution. Fortunately, this one has a great advantage: there are moments when you want to develop a new product for your client, but the client doesn't have a view of the entire product; he has just an idea of what the product should look like. So, as developers, it would be ideal to invest as little effort as possible (20%) and come up with a prototype or two for the customer, instead of spending a lot of effort in only one direction, following that single idea. In the end, one of our prototypes might turn into the final product, and what comes next is customization, additional


features and fine tuning. If we had invested a lot of effort in only one direction, which might have proved to be the wrong one, we would have lost 80% of the effort while obtaining only 20% of the results. So, the same amount of input (effort, time, man-hours) does not contribute equally to the results. Therefore, when assigning time, resources and effort, the optimal solution is the one that takes 20% of the inputs and gives 80% of the results.

Axioms

Many of the things mentioned above may or may not apply to your daily life, depending on your own principles. The next ideas, instead, are more like axioms: they are valid and need no proof.
An obvious way of doing more in less time is using shortcuts, or short keys. Whether they are used in the operating system, in your favorite editor, browser or IDE, short keys lead to a better way of working. There are lots of short key combinations, and a good way of memorizing as many as possible is to learn them gradually: one or two shortcuts per day. A cheat sheet on your desktop might also be useful for revising them.
Talking about shortcuts, we can add shell aliases here as well. Aliases can be created for long commands, for replacing a command with a personalized version of it, for replacing a long path with a shorter name, and so on. For example, if you want your remove command (rm) to request confirmation before attempting to remove a file, you can create an alias that maps rm to rm -i. Another useful example is replacing a long and complicated command of the git versioning system.
Another idea, which I personally believe is very productive, is pair programming. Besides the fact that you are not working alone, pair programming boosts productivity in the long term, and also the code quality, even if in the short term the individual time and productivity might be affected. There are other gains brought by pair programming: codebase knowledge, design and architectural knowledge, feature and problem domain knowledge, language knowledge, development platform knowledge, framework and tool knowledge. In other words, the person you are working with might have different ideas and a lot more knowledge, and through learning and sharing, the skillfulness spreads within the team.
Finally, we can pass (and we should pass) some of our work to the machines, meaning automation: starting from small commands that are frequently used, up to big applications that can generate code on our behalf (see the 21st issue of Today Software Magazine: "How to make the machine write the boring code", Denes Botond).

Conclusions

There are some patterns that ruin our productivity. Among them, multi-tasking and interruptions caused by human or digital distractions are the best known. These, along with other unhealthy habits, can be diminished by self-discipline. And if Facebook or email is more powerful than your will, there are tools that will temporarily block your access to certain web sites. Less strict tools just monitor the time you spend surfing web sites or using applications. This type of tool might be useful for personal tracking: you can monitor, identify and correct the habits that ruin your focus and productivity.

References

[1] http://www.cartoonaday.com/the-role-of-smartphones-in-business-productivity/
[2] http://blog.clarity.fm/25-minutes-is-all-you-need-how-the-pomodoro-technique-increases-productivity/
[3] http://www.nytimes.com/2011/08/21/magazine/do-you-suffer-from-decision-fatigue.html?_r=3&
[4] http://betterexplained.com/articles/understanding-the-pareto-principle-the-8020-rule/
Calvin Correli, Productivity for Programmers, http://pluralsight.com/

Gabriela Filipoiu
gabriela.filipoiu@accenture.com
Software Engineering Analyst @ Accenture


management

Requirements Engineering using the Lean methodology

Nowadays, due to the advance of technology and easy access to information, almost anybody has the opportunity to build virtually any software product or service and address it to the global market. Despite its relatively short history, the software industry has flourished, attracting huge interest and effort into developing, testing, marketing and selling its products, among many other related activities.

Radu Orghidan

radu.orghidan@isdc.eu Requirements engineer @ ISDC

The Requirements Engineering (RE) [1,2] is a crucial part of the software development process. It has been proven, both in the literature [3,4,5] and from our experience in ISDC, that functional misunderstandings or inaccurate business activity models imply far smaller costs when discovered during the requirements development or the grooming stages than when their effects appear after the software goes into production. Moreover, the largest chunk of a software development budget is absorbed by the maintenance costs which are tightly influenced by the quality of the initial requirements. The Product Owner (PO) is one of the stakeholders that use the RE community and their process for describing, documenting and eventually building the product. For a PO, the end users are the customers and the RE is one of the tools used to satisfy the customers’ needs. By changing the reference coordinate system, the RE process can also be seen as the product that the software companies, through their REs, are offering to the corporate stakeholders which, consequently, become customers. The Lean methodology was first used by Toyota in their production system. It is focused on producing only the features valued by the customer, thus, increasing quality and reducing development time and costs. In his book, The Lean Startup, Eric Ries [6] defined the startups as “a human institution designed to deliver a new product or service under conditions of extreme uncertainty.” Therefore, startups imply human interaction in a structured way. In the Requirements Process developed at ISDC, this approach is similar to the requirements elicitation and

gathering. For these steps, our RE community uses various well-defined investigation techniques, such as interviews, workshops, observation, prototyping and so on, in order to extract the customers' needs. Moreover, the second part of the definition points out that a startup delivers a product or service without knowing exactly how it will suit the needs of the end users. Again, this is very similar to the service offered by our REs, who have to develop a set of requirements [7,8,9] that fit the needs of our customers while describing the product as accurately as possible. Starting from the observation that the RE department in a software company is similar to a startup organization, we will analyze the idea of extrapolating the Lean principles developed for startups. The objective of this paper is two-fold: first, we aim to present the traditional RE process in a new light and frame it as a product that the RE department is "selling" to the project's stakeholders; second, we would like to investigate how the Lean methodology can be applied to the RE activity. Interestingly enough, the Lean methodology can be applied simultaneously to both parties involved in the RE activity: to the customer, for defining the actions that the startup has to take in order to build a successful product, and to the RE community, for defining the actions it has to take in order to satisfy the customer's needs and obtain a software product with the desired functionality.

Learn, Measure and Build

Software products can be developed using different methodologies such as waterfall,



iterative or agile. Independently of the chosen approach, RE is a mandatory step that precedes any development activity. Usually, in our company, the requirements phase is followed by the development and, finally, by the release of the product. Thus, for one of our customers launching a novel product, the learning phase is divided between the requirements stage and the release, with a higher share after the release, when the market acceptance can be measured. Figure 1 depicts a typical production flow that starts at the requirements stage, is followed by the actual development and quality assurance, and concludes with the release.

Figure 1. Learning amount in a typical production flow that starts at the requirements stage and closes after the release.

Figure 2. Creating a Minimum Viable Product enables the startups to suit the real needs of the customers. Courtesy of Lean Entrepreneur (http://LeanEntrepreneur.com/)

Figure 3. Diagram for the RE process in ISDC.

During the first stage, we must go through a learning process while adapting our customer's requirements to the activity field and the technical constraints. However, at the beginning there are very few clues about how the product will impact the end users and how they will receive and understand it. When the developed products include an important innovative component, it is very difficult to predict the feedback they will receive once on the market. While the two stages after the RE focus on the actual development process, the highest amount of learning will be harvested after the product is launched. The Lean methodology reverses the traditional industrial production process formed by the "Build, Measure, and Learn" sequence. Instead, the Lean theory advocates creating a Minimum Viable Product which is used for learning what the customers really want and, subsequently, for adapting the product to suit the real needs of the customers, as illustrated in Figure 2. Focusing on the RE stage and continuing with the analogy between the RE community and a startup, the ISDC RE community tried to understand how to apply the Lean methodology by defining the main blocks of the internal RE process. Traditionally, the RE process comprises the following major steps: planning, elicitation and analysis, development, requirements management, requirements communication and auditing. As shown in Figure 3, this process follows the Build, Measure and Learn flow.


In order to adapt to the Lean requirements process, the RE should develop the requirements in small iterations and validate the results continuously with the customer. The development of the requirements must be a priority, as it brings valuable knowledge to the REs, both in terms of the functionality of the product and regarding the customer's way of working and their expectations for the results of the requirements development process. Applying the Lean methodology means that this process should be reversed, by trying to learn as much as possible, as early as possible, in the RE process. As Steve Jobs said, "it's not your customer's job to know what they want"; thus, the RE must be able to obtain the correct information by creating a Minimum Value Requirements Product (MVRP) and measuring the customer's feedback. Trying to identify a concept and then validating it through empirical experiments is also known in the Lean methodology as "Validated Learning". From our experience, the lack of validated learning leads to frustrations on both sides and to delays in the product development.
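The validated-learning loop described above can be made concrete with a small sketch. The function name and data shapes below are hypothetical, chosen only to illustrate the cycle of building a minimal artifact, measuring feedback and refining in small increments:

```javascript
// Hypothetical sketch of validated learning: start from an MVRP,
// measure stakeholder feedback, and refine until the spec is accepted
// (or the iteration budget runs out).
function validatedLearning(buildMvrp, measureFeedback, maxIterations) {
    var spec = buildMvrp();                    // smallest thing that can be validated
    for (var i = 0; i < maxIterations; i++) {
        var feedback = measureFeedback(spec);  // learn from the customer early
        if (feedback.accepted) {
            return spec;
        }
        spec = spec.concat(feedback.missing);  // refine in a small increment
    }
    return spec;
}
```

The point of the reversal is visible in the control flow: measurement happens on every iteration, not once after everything has been built.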

The Lean Canvas tool

When analyzing and building a startup, the Lean canvas offered by Lean Stack [10] is a valuable tool. Therefore, the Lean canvas can also be used by the RE community. Considering the Product and the Market as the two main concerns of a startup, the following issues must be taken into consideration:
• Product
  • Problem: identify the problems to solve for the customer,
  • Solution: identify the solution and validate it experimentally,
  • Key metrics: find ways to measure your success,
  • Costs: outline fixed and variable costs.
• Market
  • Customer segments & early adopters: distinguish between the customers and the users; identify the early adopters,
  • Unique Value Proposition: define your offer or Minimum Value Requirements Product (MVRP),
  • Channels: define the channels or paths to customers,
  • Revenue streams: if it is the case, identify ways to produce revenue.

One of the most important steps when building a startup is to clearly identify the customer. In the ISDC RE process, this is equivalent to identifying the stakeholders accountable for the product development. In order to identify the right persons, the analysis must consider an appropriate number of candidates. It shouldn't include too many stakeholders, because this would lead to a non-manageable set of rules. On the other hand, the group must not be too narrow, as the target segment diminishes and the input from important customers can be missed. Once the list of stakeholders is clearly formed, the strongest customer segment must be identified. This includes the customers that have some kind of connection with the REs, for instance those who are easier to reach or who can communicate better. At this point it is very important to distinguish between customers and users. The rule to apply is that customers pay, while users don't. The group of early adopters is formed by the people that need the product the most. This group is crucial for collecting feedback, even before starting to build the product, and for identifying the most valued features.
As in any startup, the RE community in ISDC has to identify the problem to solve for the customer. As a lesson learned from our projects, a good approach is to identify the top 3 problems of the customers using the five whys root cause analysis. It is also very interesting to write down the solutions that the customer is currently using to overcome these problems and to identify possible alternatives for improving the current situation. The product or service that the RE is offering doesn't have to be perfect, but it must be clearly better than all the existing alternatives. For instance, we often bring in external domain experts as part of the RE community. This


TODAY SOFTWARE MAGAZINE can bring in a fresh and better view compared to what the customer would get by using internal business analysts that would most certainly be biased by the internal knowledge. Another aspect to evaluate when defining a solution for the customer’s problems is to understand what the offered product is displacing. All products displace something and our sales and RE teams aim at building a clear case upon why the customer should take the risk of making that switch. We are always careful with this aspect because the services offered to our customers will probably replace the tasks of an already existing internal team or will disturb some established flows between the end users and the customer’s IT department. The MVRP, meaning the solution implying the least development effort and offering value to the customer, must be defined. The solution may not be unique and the customer usually has several propositions, including the ones issued from our competitors. In order to attract the customer’s attention, we focus on defining a Unique Value Proposition (UVP). The UVP must be placed at the intersection of the main problem of the customer and the proposed solution. For a maximum impact, the UVP must be stated clearly and should include the result that will be provided to the customer, a specific time interval an address potential objections. For instance, a proposal that our REs presented to a customer sounded as “Clearly defined specifications in x working days with bugs free software guarantee”. The unfair advantage or entry barrier represents a feature that can’t be easily copied or bought and can be used to enforce the UVP. It is also useful at this point to prepare a metaphor that gives the non-technical customers a clear image of the RE service effects or characteristics. When necessary, the channels or paths to customers can be defined. Usually, starting with a few inbound or outbound

channels can be very useful for learning. Defining key metrics is essential for measuring the success of our projects. In order to build appropriate metrics, our RE community identifies what the customer perceives as value and uses it to define success. The fixed and variable costs are also evaluated against the metric for success in order to prioritize them.

Conclusions

This paper was focused on the RE service as it is offered by our company. In ISDC, we see it both as a product in itself and as an entrepreneurial activity. This vision led to the idea of adapting the Lean principles to RE in order to improve our overall process. The Lean methodology is focused on using activities that produce value for the customer while reducing the development cycle time, increasing the quality of the product and reducing the costs. One of the key concepts of Lean is the Minimum Viable Product, which is the implementation implying the least development effort while offering value to the customer. We show that the requirements can also be shaped as a Minimum Viable Requirements Product for our customers. Other contributions of the Lean methodology to the ISDC requirements process are the concept of Validated Learning and the idea of reversing the traditional product building sequence, transforming it into Learn, Measure and Build. Developing requirements in small steps and validating the results with the customers is crucial. By doing so, the RE community accumulates valuable knowledge both in terms of the functionality of the product and in terms of the way of working of the customers and their expectations for the results of the requirements development process.

Bibliography
• Yunyun Zhu, „Requirements Engineering in an Agile Environment”, Department of Information Technology, Uppsala University, 2009.
• Michael Krasowski, „Best Practices for Requirements Gathering”, online course at http://pluralsight.com/.
• Chetna Gupta, Yogesh Singh, Durg Singh Chauhan, „Dependency based Process Model for Impact Analysis: A Requirement Engineering Perspective”, International Journal of Computer Applications (0975 – 8887), Volume 6, No. 6, September 2010.
• Jose Alfonso Aguilar, Irene Garrigos, Jose-Norberto Mazon, „Impact Analysis of Goal-Oriented Requirements in Web Engineering”, The 11th International Conference on Computational Science and Its Applications (ICCSA 2011).
• Elizabeth Hull, Ken Jackson, Jeremy Dick, „Requirements Engineering”, Springer, 5 Oct. 2010.
• Eric Ries, „The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses”, Crown Business, September 13, 2011, ISBN-10: 0307887898, ISBN-13: 978-0307887894.

www.todaysoftmag.com | no. 22/April, 2014



programming

Perspectives on Object Oriented Design Principles

Popular belief about entropy leaves no room for interpretation: the bigger the entropy, the greater the chance for disorder and chaos to rear their ugly heads at every step. This means unpredictability, which of course is not among the desired qualities of a good design. However, as we shall see in a minute, big entropy (I am referring here to Shannon entropy, not the thermodynamics version, although there are similarities) is not a sign of bad design; in fact, by looking at its overall entropy we are not able to say anything more about a design than that it solves a certain problem which needs as many states as the design allows. While this is counterintuitive, the only way to get a good design is to grow it towards increasing entropy, as any attempt at reducing entropy will result in unwanted strong coupling and weird behaviour. The purpose of this article is to look at how some of the well-known design principles influence local design entropy.

I will start with the Liskov Substitution Principle (LSP), as its influence on entropy is more straightforward. In a couple of words, LSP is a rule that helps programmers identify when to use inheritance. One of the best-known examples of LSP violation is the „Squares and Rectangles” application. Imagine you've created an application that manages Rectangles. The application is so successful that users are requesting a new feature in order for it to also handle Squares. Knowing that a Square is a Rectangle, your first design choice is to use inheritance (derive Square from Rectangle); this way you are reusing all the functionality already implemented.

class Rectangle {
public:
    void SetWidth(double w) { width = w; }
    double GetWidth() const { return width; }
    void SetHeight(double h) { height = h; }
    double GetHeight() const { return height; }
private:
    double height;
    double width;
};

Square overrides SetWidth and SetHeight.

void Square::SetWidth(double w) {
    Rectangle::SetWidth(w);
    Rectangle::SetHeight(w);
}

void Square::SetHeight(double h) {
    Rectangle::SetHeight(h);
    Rectangle::SetWidth(h);
}

I will not go into more details on why this turns out to be a bad idea (more info can be found in Robert Martin's article on Object Mentor, http://www.objectmentor.com/resources/articles/lsp.pdf), but I will show how this influences design entropy.

First, a couple of words on entropy. Shannon entropy is a measure of the uncertainty associated with a random variable and is typically measured in bits. Yup, the same bits as in memory capacity or network throughput. If this wasn't strange enough, find out that a single toss of a fair coin has an entropy of one bit! Entropy measures the quantity of information needed to represent all the states of a random variable. For small quantities of information we can identify simple rules to represent all the states. Chaos, however, or large random sequences, have huge entropies; they are an explosion of information and there are no simple rules to guess the next number in the sequence.

[A short discussion about the similarities between thermodynamic entropy and Shannon entropy: in the end both represent the same thing. Use as an example a container with two liquids (one white and one black) separated by a wall. After removing the wall the liquids mix, therefore increasing the entropy. From the information theory point of view, identifying the position of each particle relative to the separation wall requires much more information when the liquids are mixed.]

There is also a definition and a formula for entropy. For a random variable X with n outcomes (x1, x2, x3, …, xn) the Shannon entropy (represented by H(X)) is:

H(X) = − Σ p(xi) · log2 p(xi), summed over i = 1, …, n

where p(xi) is the probability of the outcome xi. Let's take some examples in order to get a grasp on what this means and why it makes sense to measure it in bits. [Depending on the base of the logarithm, the entropy is measured in bits (base 2), nats (base e) or bans (base 10).]

Example 1
How much information is required to store variable X having possible outcomes in the set {0, 1}? Consider p(0) = p(1) = 1/2 [that means 0 and 1 have the same chance, 50%, to be assigned to X].

Example 2
How much information is required to store variable X having possible outcomes in the set {00, 01, 10, 11}? Consider p(xi) = 1/4 for each outcome [that means all values have the same chance, 25%, to be assigned to X].
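The two examples above can be checked numerically. Here is a small Python sketch (my own illustration, not from the article) that computes Shannon entropy from a list of outcome probabilities:

```python
import math

def shannon_entropy(probabilities):
    # H(X) = -sum over i of p(xi) * log2(p(xi)); terms with p = 0 contribute nothing
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Example 1: outcomes {0, 1}, each with probability 1/2
print(shannon_entropy([0.5, 0.5]))                # 1.0

# Example 2: outcomes {00, 01, 10, 11}, each with probability 1/4
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```

A fair coin toss is exactly the first case, which is why it carries one bit of entropy.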


The formula tells us that H(X) = 1 bit in the first example and H(X) = 2 bits in the second.

[Ask yourself if you know the definition of the bit. The definition is not straightforward, and every programmer thinks he/she knows it (which is kind of funny because it turns out they actually don't).]

This all looks nice and tight, but how does it apply to object oriented design? Well, let's go further with our analysis. How much entropy is in the Rectangle class? We can look at its fields, width and height, but we'll use a simplified case where they can take only the values 0 and 1. It looks like the Rectangle class is defined by a random variable XR = {wh}; XR has possible outcomes {00, 01, 10, 11}, each a different combination of width (w) and height (h), and we know from the second example that the entropy equals 2.

H(XR) = 2 (bits)

How much entropy is in the Square class? We can look again at its fields, width and height, which can only take the values 0 and 1. It looks like the Square class is defined by a random variable XS = {wh}; XS has possible outcomes {00, 11}. Now the entropy is different, because width and height no longer vary independently. Every time width (w) gets set, the height (h) gets set to the same value. We know from the first example that the entropy equals 1 in this case.

H(XS) = 1 (bit)

Here is our first rule of how the entropy should be allowed to vary in a design: whenever class S (Square) extends class R (Rectangle), it is necessary that H(XS) >= H(XR). In our case 1 = H(XS) < H(XR) = 2! What actually happens when we break the „entropy rule”?
• Let's say we have a method (function) m using objects of type R.
• If class S extends class R by breaking the entropy rule, then method m will have to account for the missing entropy in class S (read this as strong coupling; adding if statements to discriminate between Square and Rectangle is one possible scenario).

[An important aspect is how the design grows the entropy, because chaos and disorder inside source code also come from how the entropy is structured, grown and used within classes.]

Real world example

That's a door. You've guessed! What does a door do? What is its behavior? Well… a door opens (if it's not locked) and it uses the interior of a room to do it. Here's a simple way to write it in code:

class Door {
    void Open(Room r) {
        // … the door opens inside the room
    }
}

Imagine the entropy of the room is proportional to its volume. What would happen if we extend the class Room by reducing its entropy (volume)? Let's call this new class a FakeRoom. Well… the next picture speaks for itself. The missing information (entropy) in the room needs to be accounted for and gets coded into the door (by cutting out the bottom part so it can be opened). Now the door and the room are strongly coupled. You can no longer use this door with another room without raising some eyebrows!

[Developers should understand their design will look the same as in this picture. My advice is not to ignore the signs; a vase and flowers will not transform it into a good design.]

[We can imagine a second example with a water pump and pipes of different sizes.]

Conclusions:
1. Lowering the entropy by using inheritance is a sign of broken encapsulation.
2. Aggregation should be favored over inheritance. There's no way to break the entropy rule when using aggregation; entropy can be varied as desired.
3. Depending on abstractions is a good practice, as an interface doesn't have a lower entropy bound and allows for any customization.
4. As a mathematician would put it, the entropy rule for LSP is necessary but not sufficient, meaning that if we obey the rule we might get a good design, while if we break the rule this definitely leads to a bad design.
5. Design entropy is a perspective that can augment already existing methods of detecting LSP violations.
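To make the coupling described by the entropy rule concrete, here is a Python sketch of the same Squares-and-Rectangles scenario (an assumed translation of the article's C++ classes, not the original code). The function stretch plays the role of method m written against Rectangle:

```python
class Rectangle:
    def __init__(self):
        self.width = 0.0
        self.height = 0.0

    def set_width(self, w):
        self.width = w

    def set_height(self, h):
        self.height = h

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    # The overrides keep width == height, so the outcomes collapse
    # from {00, 01, 10, 11} to {00, 11}: entropy drops from 2 bits to 1.
    def set_width(self, w):
        self.width = w
        self.height = w

    def set_height(self, h):
        self.width = h
        self.height = h


def stretch(r):
    # Written against Rectangle: assumes width and height vary independently.
    r.set_width(5)
    r.set_height(2)
    return r.area()


print(stretch(Rectangle()))  # 10, as the author of stretch expects
print(stretch(Square()))     # 4 -- the substituted Square broke the contract
```

The only way to "fix" stretch without changing the design is an isinstance check discriminating Square from Rectangle, which is exactly the strong coupling the entropy rule predicts.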

Cătălin Tudor

ctudor@ixiacom.com Principal Software Engineer @ Ixia



programming

Machine learning in the cloud

One of the most important factors that contributed to the huge successes of machine learning in the last 10 years was the increase in computing power. In 2010, Dan Cireșan et al. set a new state of the art for handwritten digits using an algorithm developed in the 1980s, augmenting the data set with a procedure described in 1990. The only difference was the amount of computing power: using a modern GPU they finished training in one day, where it might have taken 50 days on a CPU. But in parallel with the increase in processor clock speed, the amount of information to be processed grew at an even faster rate. To deal with these situations, many cloud-based solutions have appeared, some offered by startups specializing in various areas of machine learning.

One of the first startups to offer machine learning services in the cloud was BigML, which launched about two years ago. They started by offering decision trees and then developed their product with various improvements, such as pruning strategies and ensemble methods. Their service can be used in two ways: from a web interface and through an HTTP API. After we have trained a model, we can visualize it using two different types of diagrams, which help us see the influence each feature has on the outcome. One of the visualizations is an interactive tree where we can walk the tree, observing the decision that is made on each level. The other visualization is called a „Sunburst diagram”, which shows us how many data instances support each decision and what the expected error is for it.

[Figure: A decision tree for a model of the grades obtained by a student]

[Figure: Sunburst diagram for the same model]

Ersatz Labs is a more recent startup, still in private beta, but they plan to go public around April or May. Their specialization is deep learning. They offer various models of neural networks, for which you can set various hyperparameters and then start the training, which is done on GPUs. They analyze the data you upload and, using some heuristics, suggest values for the hyperparameters that might work best for your data. With only a few adjustments, I managed to build, in 5 minutes, a model that recognized letters and digits from receipts with 80% accuracy. After the training is done, we can see how the values of the accuracy, the cost function and the max norms of the weights of the neural network evolved over each iteration. Using this information we can then fine-tune our hyperparameters to obtain better results.

Even though the beta invites are usually received the same day, the fact that Ersatz is still in beta must be kept in mind. While testing the service, on the first dataset I uploaded I encountered a mysterious bug: „Training failed”. I talked to customer support and they told me that I had found a bug in Pylearn2, the Python library they use for neural networks. They solved it in 2 days, but even after that, the service had some hiccups. The models they offer so far are autoencoders for image data, recurrent neural networks for time series, neural networks with sigmoid or ReLU layers, and convolutional networks with or without maxout.

[Figure: The charts for accuracy and loss]

PredictionIO is a bit different from the other products on this list. Even though it is developed by TappingStone, which offers commercial support for it, the actual product is distributed on GitHub under an open source license. PredictionIO is a recommendation engine built on scalable technologies, such as MongoDB and Hadoop. With it, using historical data about the actions done by a user (such as viewing, buying or rating a product), we can recommend other products he might be interested in. The engine has two components. The first one is a web interface where we can manage the algorithms we use for making recommendations. We can select various algorithms (or implement our own custom ones), set their hyperparameters and run simulated evaluations. The other component is an HTTP API (with SDKs for various languages) through which we can add new users, products and actions, and then get recommendations. Using MongoDB and Hadoop makes PredictionIO quite powerful, but also more complicated. If you want to scale up from the default Hadoop configuration, which runs on a single machine, you are on your own with managing the cluster. For the other services listed here, when you need more processing power, all you have to do is click a button in the browser (and switch to a more expensive plan).

AlchemyAPI offers deep learning services as well, but at a much higher level than Ersatz. You don't get to train neural networks yourself on your data; instead, they have pretrained networks for natural language processing tasks such as entity extraction, keyword finding, sentiment analysis, and finding the author and language of a piece of text. All this can be accessed through their API. They don't offer much in terms of customization, most of the service being already implemented. As long as we only use the languages for which they have support, it will work quite well, because the problems of entity extraction, sentiment analysis and others are general enough to work in any domain. However, when you want to use it on a language that is not well „known” to them, such as Romanian, the service doesn't know what to answer. AlchemyAPI can be used through the SDKs they offer for various languages, such as Python, PHP, Node.js, C++, Java and C#, which can then be integrated into our applications.

[Figure: Entities found by AlchemyAPI in a post about Stephen Wolfram's new language]

These are only a few of the cloud-based machine learning services. There are many others, ranging from the Google Prediction API (which is completely closed and doesn't say which algorithms it uses for making predictions) to ŷhat, which is exactly the opposite: they don't offer you any algorithms, only a framework for Python and R with which you can build scalable solutions.
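As a rough illustration of what a recommendation engine like PredictionIO does with historical user actions, here is a minimal item co-occurrence sketch in Python. This is a toy stand-in of my own; it is not PredictionIO's actual algorithm or API:

```python
from collections import Counter
from itertools import combinations

def recommend(histories, user_items, top_n=2):
    # Count how often each ordered pair of items appears together
    # in past users' action histories.
    co = Counter()
    for basket in histories:
        for a, b in combinations(sorted(basket), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    # Score candidate items by co-occurrence with what the user already has.
    scores = Counter()
    for item in user_items:
        for (a, b), n in co.items():
            if a == item and b not in user_items:
                scores[b] += n
    return [item for item, _ in scores.most_common(top_n)]

histories = [{"book", "lamp"}, {"book", "lamp", "pen"}, {"book", "pen"}]
print(recommend(histories, {"lamp"}))  # ['book', 'pen']
```

A real engine adds weighting per action type (view vs. buy vs. rate), incremental updates and distributed computation, which is where MongoDB and Hadoop come in.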

Roland Szabo

roland.szabo@3pillarglobal.com Junior Python Developer @ 3 Pillar Global



programming

Rapid Application Development for the Web with Oracle APEX

Ever wanted to build a web application extremely fast, without needing to learn a new programming language? Ever wondered why it is still too complicated to quickly create web pages with forms and reports, and why every Rapid Application Development tool out there becomes "rapid" only after spending a few months learning it?

Well, there is a hidden gem called Oracle Application Express (APEX) that might just be the answer for one-off web application developers, database integrators and expert programmers alike. Using a highly declarative environment, you can build professional web applications with a click-and-click approach. The real surprise here is that this tool comes from Oracle, a corporation famous for valuing expensive "locked" products. APEX is free (but still not open source) and spawned from an internal Oracle project meant to make life easier for database developers and administrators.

My experience with APEX began in 2009 while working for a life insurance company as an Oracle database developer. The main task was integrating multiple software systems, both internal and customer-facing. Luckily, all software components were using Oracle databases; unluckily, there was little time and there were few resources to develop the user interfaces in Java, .NET, PHP or any other programming language and framework. The IT manager had little experience with APEX, but great faith in its capabilities. So began a 3-year-long intensive experience with APEX 3.2, at a time when documentation was sparse, the experts were just a few, and just a dozen enthusiasts were talking about APEX in public events, forums or message boards. The result was a unique development experience, lots of documentation, guides and whitepapers created along the way, and a book published in 2013 called "Oracle APEX Reporting Tips&Tricks" (available on Amazon, the iBookStore and Barnes and Noble).

Oracle Application Express, commonly known as APEX, is a Rapid Application Development (RAD) tool that reached a high level of maturity with the launch of version 4.0 in June 2010. APEX combines fast development cycles for web-based applications revolving around an Oracle database with a strong developer base and dedicated "evangelists" promoting the technology.
The programming technique is highly declarative, in a web-based environment, with little programming effort required. APEX uses a unique concept that can be considered the opposite of all current web development trends. While nowadays everything in a web application must be as loosely coupled from the database as possible, with emphasis on client-side interaction, APEX takes a radical approach: everything is stored in the database, from the data to the metadata that generates the web pages. A web server is used to generate HTML pages directly from the database, where both the web page data and metadata are stored. Although APEX comes free of cost, all the development is done within an Oracle database using unique Oracle concepts, with all the backend processing and most of the frontend processing being performed by Oracle database stored procedures.

An Oracle APEX web application is developed using SQL and PL/SQL, although most of the development can be done in a declarative way by using the browser-based development interface. It is a database-centric tool, meaning that it requires – and will only run on – an Oracle database. The history of APEX starts in 2004, when it was just an internal Oracle tool called HTML DB. In 2006 it was renamed to Application Express, version 2.1; currently it is at version 4.2.4 and there is already an early adopter release of version 5.0 available (https://apexea.oracle.com/i/index.html). Using APEX on an existing Oracle database instance, even the free Oracle XE one, does not require additional licensing, and it is not restricted in the number of developers, applications and end-users. It supports DB versions from 10gR2 up to the latest and it can be used with Exadata, ORA and RAC setups. By default, Oracle APEX is now distributed with all Oracle database editions.

From an architectural perspective, APEX uses a simple 2-tier architecture. The web pages are dynamically rendered using the metadata stored in the database, and there is no code generation or file-based compilation at any time. It basically runs wherever the Oracle database runs. APEX uses a multitenant hosting principle, organizing web pages into applications and workspaces, which can use distinct or shared databases. Although most of the underlying code is written in PL/SQL, getting started with APEX requires little knowledge of any programming language, except maybe some HTML and web basics. Being web-based, the development process consists of using a series of predefined pages and objects, from forms to reports and charts. All pages and components are based on Oracle DB objects, usually tables and views, so a schema management tool is embedded in the project. Creating tables, views and stored procedures can be done from APEX, so the entire development process can be encapsulated, at least in the early stages, within the product's web pages.

Accessing APEX is done through a URL in a browser, whether you are accessing a locally installed version of Oracle APEX, an instance provisioned in a private cloud (SaaS) or the Oracle Database Cloud Service, Oracle's own cloud service that relies on APEX for application development (http://cloud.oracle.com). However, Oracle APEX is not a tool that suits any project. The most typical use-cases where APEX should be used are data-driven applications (opportunistic and departmental productivity apps), online reporting (SQL-based), spreadsheet web-ification (by transforming Excel spreadsheets into web apps) and access replacement (where APEX can be used as a central point of access to multiple Oracle database schemas).

The main components of the APEX development environment are:

Application Builder, where applications and application pages are created declaratively using wizards. Each application is composed of one or multiple pages, translated at runtime into one or multiple web resources, and each page is split into several regions. Each of the page regions can contain text, custom PL/SQL, reports, charts, maps, calendars, web service references or forms. There are also other objects that are specific not only to the pages but to the whole application, like application items, processes, computations, authentication and authorization schemes, or navigation objects like tabs, lists or breadcrumbs.

SQL Workshop, a tool that enables the management of database objects. Ad hoc queries and wizards for creating tables, views, stored procedures and other database objects make up a suite of features that enable the developer to do schema management tasks from the browser-based APEX tool.

Team Development, an integrated team management development tool for tracking features, bugs and milestones. The tool is linked directly to the APEX pages.

Administration, for account administration, the workspace, and a dashboard for workspace utilization.

As with most RAD tools, Oracle APEX provides easy development, in a declarative manner and by just using pre-built items, of the following components: reports, forms, charts, calendars, UI templates, navigation, validations, processes, computations, web services, email services, translation services, authentication, authorization, and logging and monitoring.

For more details on how to get started with Oracle APEX, check out my "Oracle APEX Reporting Tips&Tricks" book (2013) on: http://www.apexninjas.com/blog/2013/06/oracle-apex-reporting-tips-tricks-out-now/. Also, you can check out a simple demonstrative blogging platform built using APEX here: http://apex.oracle.com/pls/apex/f?p=20559:101:

George Bara

gbara@sdl.com Business Consultant @ SDL



showcase

Inside view of GPS Navigation

The current article is an overview of the changes that occurred over time in the skobbler navigation app for the iOS platform, its current feature set and high-level architecture, and the innovations introduced with the 5.0 version, which has been available on the App Store since December 2013.

This product was first launched on the iOS market in October 2009. At that point in time, skobbler was the first to use map data powered by OpenStreetMap, then a small community of enthusiasts striving to change the whole approach to maps. The app delivered turn-by-turn navigation and audiovisual advisory, based on a permanent internet connection. After almost two years of regular updates and improvements, the product got to 1.5 million users. But this wasn't enough. Beyond delivering quality on the present features, something else needed to change. Focusing on our users' needs led to adding the offline functionality and switching the user experience completely, with a new design approach. The new product was shipped to the market in October 2011. It was the industry's first universal, online-offline („hybrid”) navigation app for iPhones and 3G-equipped iPads. The main focus was on the user. We intended to solve the traveling issues of regular navigation apps, which incurred roaming costs and huge downloads. And we wanted to deliver that through a clean interface that goes beyond form, straight to intuitive function. A skeuomorphic but minimal design that took into account the driving context of navigation (with avoidance of small UI elements) was the proper solution.

When changing the UI of the app, we had several design samples to choose from. One of the designs was more focused on the traditional style of navigation apps and was similar to the previous app's menu. The second one took a completely new approach to the color schemes used and the positioning of the buttons.

We first tested the app internally. The user experience feedback received from the team helped us add some improvements to the final app. After weeks of development and testing, the final app hit the market. The new UI doubled our number of downloads on the App Store in the first weeks after the release.

The new update, GPS Nav 5.0, was released in December 2013. GPS Nav delivers the most versatile voice-guided turn-by-turn navigation product and integrates new styles and a Tripadvisor feature. It is a standalone iOS application with a four-and-a-half-star rating on the German App Store and over four million users.


Equipped with an iOS SDK that provides the mapping and navigation functionalities, the application uses skobbler's own maps (also known as OSM+ maps) based on OSM map data. The data extracted from OSM goes through improvement processes and is compiled into skobbler's own map format. The feature set of the current version includes the following:

Map styles – customizable map styles allow users to change the „skin” of the map depending on the weather or time of day. The available map styles are the day and night styles, a gray scale style and an outdoor style. For the navigation mode there is an additional speedcam style, used to warn the users of a potential mobile or fixed speedcam.

Offline functionality – the application provides the option to purchase and download map packages for countries and cities, ensuring that the users have map display, routing, navigation and search functionalities without an internet connection.

Search – used in both online and offline modes. The search done with an internet connection uses the Apple service for address searches and Tripadvisor plus skobbler's own category search for POIs. Without a connection, search relies on skobbler services, such as the multi-step search for address searches and the category search for POIs.

Travel guide – based on Wikitravel content, the users have access to articles about countries, cities, interesting facts and POIs displayed on the map. Individual articles are available both in online and offline mode.

Navigation and free-drive mode – navigation can be started on an already calculated car route, providing the user with information about the distance to destination, the estimated time to destination, street names, turns and distances, audio advice, current speed, speed limit and speed warnings. The free-drive mode should be used when there is no specific destination to which the user navigates. It provides map display in 2D and 3D mode, speedcam detection and information about the current street the user navigates on.

Speedcam – the data for speedcam detection is kept up to date: the static speedcam databases are updated on a daily basis, while the mobile speedcam databases are updated every five minutes. The service to detect static speedcams is free of charge; the mobile speedcam detection services can be purchased through In-App purchases.

FCD (floating car data) – the app, used on a daily basis by millions of users, collects a significant amount of floating car data. The information related to road networks, speed, directions and travel time is used to further improve the OSM+ maps. The floating car data collected by GPS Nav is close to seven million driven kilometers per day.

Project architecture

The GPS Navigation application is based on skobbler's NGx technology and iOS SDK. The current section provides an overview of the project architecture and its components:
• OSM+ - provides the map data required for map rendering, routing and navigation.
• Core component - a cross-platform C++ component responsible for map rendering (OpenGL), routing, navigation and searches.
• iOS SDK - an Objective-C++ layer (similar to the MapKit framework provided by Apple). It ensures the integration of the feature set supported by the Core component into any native iOS application.

iOS team @ Skobbler

www.todaysoftmag.com | no. 22/April, 2014



HR

IMAGINE - a study regarding Cluj IT companies

This article presents the results of a study regarding the way students think about ten well-known IT companies in Cluj. Being, most likely, future employees of these companies, their opinions may help IT companies reconsider their policies towards applicants, in terms of both HR and PR practices. Thus, companies can better define their place in the local IT community.

As a consulting company in Organizational and Managerial Development, working frequently with IT companies from Cluj, we wanted to find out more about this industry. For quite a while we have been talking about the Cluj IT cluster, which aims to develop a real business community in this field. But what is this community like? What is it made of and what are the relationships between its main actors? What are the problems it has faced and what are the solutions it has come up with?

Last year we conducted a research study, based on the discussions we had with HR specialists from the Cluj IT business, to see how companies evolved, what the main challenges of the moment were and how the IT community idea was perceived. We published the conclusions in issues 13, 15, 16 and 19 of TSM magazine. At that time, by far the main problem IT companies from Cluj faced was the lack of qualified personnel. There were not enough specialized graduates, compared to the companies' need for employees. Various solutions were implemented and sustained by companies: professional conversion of some potential employees, to acquire knowledge and skills in programming; bringing professionals from other geographical areas or opening branches around the country (usually where there are universities); even including students in programs meant to attract them as employees. Yet, the problem remains - there is still a tough fight over human resources. As a result, companies have taken a series of actions related to visibility and working climate.
We were interested, in this study, to find out the opinions of those who represent the prize of this fight: future employees. What is their opinion about potential employers?


What do they know about these companies? (We are interested in their subjective view, which will cause them to make a choice in the future, not in a hard-to-define objective reality.) For our study, we consulted professors from UBB, specialized in sociology and management. When we developed the questionnaire, we also took into account the recommendations of some IT employees (both new and more experienced), HR specialists and IT entrepreneurs.

Methodology

The first decision we had to make regarded the study participants. It was decided that they would be students of Computer Science at both UTCN and UBB, in both the Romanian and English lines of study. They represent, by far, the most important pool of interest for employment in IT, even though not the only one. Then we had to choose the students' year of study. We considered that those in the first year know quite little about companies in Cluj, while those in their last year know too much, considering most of them are already employed; consequently, their opinion could be biased by their job. As a result, we were interested in second year students.

The next step consisted of choosing the companies to be studied. We used two criteria to increase the chance of companies being known by students: size (the number of employees) and longevity (number of business years in Cluj). In the end, 10 companies were selected:
1. AROBS
2. EBS
3. ENDAVA
4. EVOLINE
5. EVOZON
6. FORTECH
7. ISDC
8. IQUEST
9. SOFTVISION
10. YONDER

Finally, we chose the keywords that would describe any company in the industry. We named them descriptors and we selected them after consulting various people from the IT business. We asked the respondents to select a maximum of three descriptors (out of the thirteen available) for each company, because we wanted to see how the results polarized - which are the main characteristics of the companies, according to students. List of descriptors:
1. They offer a lot of training
2. They are secretive
3. They have a pleasant work climate
4. They are only interested in money
5. They are stable in the market
6. They are demanding
7. They conduct complex projects
8. They have valuable employees
9. They are very adaptive
10. They pay well
11. They have their own products
12. They rely on outsourcing a lot
13. They are arrogant

All these steps resulted in a questionnaire with two main parts. In the first part, respondents had to say whether or not they are familiar with the companies (whether or not they have heard about them). In the second part, they had to express opinions about the companies they know, using the descriptors we mentioned. We also collected demographic information about the college they attend, gender, and e-mail



address, so that we could send them the final results.

How it happened!

From February 26th to March 3rd, 400 students from the above-mentioned colleges completed our questionnaire; the number of participants makes the results statistically relevant. We would like to thank all students who completed the questionnaire, as well as all professors who facilitated this process. Each company included in this study received a detailed report about students' perceptions of that company, thus allowing them to see how they compare to their competitors on the market. In this article we do not correlate the answers to the names of the companies, in order to keep their confidentiality. However, we hope that by comparing the results, each company will be able to improve their recruitment or PR policies. Because our objective was to get relevant and useful data for these companies, any feedback is welcome!

About the results!

In order for companies to attract applicants, it is vital for them to be known by their future employees. This is why the first question asked whether the respondents knew each of the specified IT companies. The answers varied from 25.3% to 80.2%. Considering that all IT companies work on becoming well-known, and taking into account that most of them are old and/or large companies, scores like 25-30% are, from our point of view, quite weak. Therefore, the image, marketing or PR strategy should probably be revised. The results regarding the descriptors, as chosen by students, vary quite widely. We do not think that the mean or the median is relevant in this case. However, it is interesting to compare the scores of the companies - this offers a better picture of where a company is

situated, according to the students' subjective perceptions, compared to its competitors. The results are shown in the following table (the order of the companies in this table is random, so it is impossible to guess the score of each particular company; companies interested in the detailed results can contact us, although we will not reveal the identity of their competitors). Extreme scores are highlighted to make it easier to identify the differences. In many cases, the ratio of opinions is higher than 2:1. In the case of arrogance, the ratio is higher than 8.5:1, even if the overall numbers are relatively small. When it comes to outsourcing, students have similar opinions (3.9 - 5.6%). With regard to products, one company stands out, with 12.9% of respondents considering that it has its own products (compared to the weakest score of 4.1%). Interestingly, when it comes to the most sensitive characteristic, payment, the ratio of opinions, in the extreme case, is higher than 2.25:1. This means that a student would be 2.25 times more likely to go first to company G, instead of company D, to get a job. Of course, many other aspects can influence students to prefer one employer over another. Nevertheless, these perceptions are relevant and we think they represent food for thought for the companies in this study, if not for others in the IT business as well.

Dan Ionescu
dan.ionescu@danis.ro
Executive director @ Danis Consulting



showcase

Lumy.ro - usability testing

Lumy.ro is the first usability testing platform in Romania, launched by the REEA company - www.reea.net. For the first time in our country, companies, but also self-employed persons, have the possibility to find out the opinion of the users they are targeting through their web and mobile products. Following the tests, the platform provides accurate information about the website or the Facebook application the developers have produced, even before launching it on the market.

The Lumy.ro platform provides usability testing services upon request. Usability tests are used to check the navigability of the website, the relevance of the content, the efficiency of the design and the ease with which clients/users can carry out certain critical processes within the website and/or application. Within the platform, there are two types of testers: beginner and advanced. The only difference between these two types of accounts is the access to different types of tests. Depending on their difficulty, for each test carried out, the user receives a certain amount of lumy points in his account. There are five types of usability tests: heatmap, clickmap, preferences, feedback and video feedback. The prices are accessible and can be adjusted according to the available budget. For the amount of 50 lei, a company gets 17 video tests, from which real feedback will emerge. The number of testers registered on the platform has reached almost 520 and is continuously growing.

Daniela Ferenczi, UX specialist within the REEA company, is the initiator of Lumy. The work on everything that Lumy means started in June 2013, in Targu-Mures. The name comes from the Romanian word for "light" (lumina). The idea for this name came from the fact that, when you carry out a project, you are creating it in your own office, relying on your own knowledge and on the know-how of the team you are working with.
On completing the project, without usability testing, your work and creativity concerning usability are launched into the dark. Practically, you do not know whether your users have received a product or service that can be easily used, whether it makes sense to them or not, or whether it offers them an experience rather than mere utilization. Lumy has the role of "illuminating" you with regard to the work of a UX designer, of an information architect and even of a web designer, through the usability testing services it provides.


Implementation

The first step was, obviously, the project analysis: what we wished to obtain in the initial phase and where we wanted to get. There were many hours of brainstorming by all the teams involved and the completion of a wireframe, which was the starting point. Then we went on to implement the platform, with its two versions - one dedicated to testers and the other to companies - in several steps:
• the testers' version and the realization of video tests and feedback by testers through the Lumy Recorder application;
• the companies' version;
• the integration of the preferences, clickmap and heatmap test types;
• test orders for the part dedicated to companies;
• the capitalization of the testers' earnings;
• the English version of the platform;
• an upgrade for the foreign companies part;
• API implementation for the iOS application.
These steps were implemented by the .NET team. Simultaneously, the iOS team implemented the Lumy application for iPhone and Lumy Recorder for iPhone. At the same time, the website was adapted for mobile phones and tablets. Among the technologies employed, we could mention: Axure, MVC4, C#, MS SQL Server, Microsoft TFS, Node.js, HTML5, CSS3, Adobe Photoshop, Responsive Web Design, MailChimp, NearForums, Scrum Project Management. On the iOS side, the Objective C and C++ programming languages were used, together with the iOS SDK and OpenGL.

What is Lumy Recorder?

Lumy Recorder is a stand-alone desktop application which records testing sessions - tests in which the user's answer is recorded in text format, together with his voice and the actions happening on his screen. The recorded data is instantaneously sent to a server, which stores it in a centralised database. The technologies used are the following:
• for screen capture: a screen capture filter, together with FFmpeg;
• for audio capture: FFmpeg was mapped onto Windows Core Audio for microphone detection and selection.
The communication between the browser and the application is done through a local HTTP server. The communication with the server is done through TCP. On the server side, for storing the data stream, we used Node.js, which drives FFmpeg instances. The most difficult tasks were finding an efficient manner to capture the screen and compressing the audio and video data flow as well as possible. Another challenging situation was mapping the microphones, as represented in the FFmpeg manner, to the ones already existing in the system.

Lumy Recorder for iOS

Through this application, the testers registered on the platform are able to carry out feedback and video tests, and the companies get real reactions to the mobile versions of their websites. The Lumy Recorder application for iPhone was built using the Objective C and C++ programming languages, the iOS SDK and OpenGL. A team of three developers worked over 600 hours to publish it in the App Store. The most complicated part of developing the application was the video capture. The native SDK does not allow video capture from the browser, so the developers had to find another way, using OpenGL. If you access this link http://www.youtube.com/watch?v=CeqXjD99z1o&feature=youtu.be, you will be able to see an example of a video test, done on an iPhone 5S. The usability testing platform - Lumy - was completely developed, from concept to implementation, by REEA.

Daniela Ferenczi
daniela.ferenczi@reea.net
UX designer @ REEA, Lumy initiator



management


Improving - why bother?

I recall facing this question myself five years ago, when I accepted the assignment to roll out an improvement programme in a 150+ people IT services company. At the time it seemed to me a few months' assignment, ending when "we write down our processes and then get CMMI certified". What a simplistic, foolish view! Now I wish I could give some advice to my younger self... however, since that is not possible, it may make sense to share some lessons learned along the way - sometimes the hard way - hoping that someone else will benefit from them. What happened during the past 5 years was not only a learning experience but a complete change of mind-set with regards to quality and continuous improvement, thanks to the great people who guided and mentored me in this. In reality, this quest turned out to require 5 years of hard work, serious commitment not to give up and eventually changing the mind-set and the overall way of working in the organisation.

Such programmes are no easy job. They are intrusive, hard to predict and often seen as "messing up" the normal way of doing things. So the question is a natural one: "Why bother?" Also, during the past few years I learned that there are many IT people - engineers and managers alike, in small and large organisations - who are now asking the same question: "Why bother?" They are not asking it with the same words, of course, but with the same idea, with variations like:
• "Things worked well so far, why change?"
• "We do not want to get ISO or CMMI certified, so what's the point?"
• "We are a small company, we hate corporate thinking, and these things are not for us."
• "We have no time this year, maybe next time."
• "Why spend money on this, instead we'll hire two more coders."
• "We have been working on processes and nothing changed."
• "Someone else tried and failed."
• "We tried and failed."
• ...and the list goes on.
Sometimes the answer comes from prospects or customers who are only willing to partner with companies that can prove a certain level of maturity. This can basically trigger one of two reactions: either (1) finding shortcuts to get that certificate, or (2) making an investment and optimising it in order to maximise the returns. Luckily, the majority of managers would choose the

latter, but the scale of the other group is significant nonetheless. But the prospects (or customers) asking for such a proof of maturity are rarely interested in a "rubber-stamp" alone. They are interested in knowing that those bright success stories on our webpages can be repeated. They want to see that those stories have been analysed, and that their main ingredients for success are identified and available for future projects to be done together.

This is a matter of key importance for Central and East European IT companies. There is always someone out there offering the same services cheaper, or faster, or both. Therefore, the only chance we have to remain competitive in a crowded, volatile IT services market with global competition is through demonstrated and continuously improved quality. Quality that is not only plainly stated in our tagline. Quality that first needs to be DEFINED, then needs to be MEASURED and DEMONSTRATED and, finally, also needs to be visibly IMPROVED over time. This is not achievable in a brainstorming meeting. It is achievable with improvement programmes that require both commitment and investment.

The term "Quality Assurance" has a meaning much broader than we are used to, beyond testing, beyond Quality Control. The existence of a QA system can make a significant difference in the organisation. There is a common misconception about QA: in the vast majority of cases, QA is considered to be fulfilled by the test engineers in a team. Even worse, they are often referred to as "the QA's", who are not part of the development team; instead they test the results with a delay of a full iteration. Actually, testing activities are part of QC (Quality Control) within the development lifecycle, checking the products/services that have been produced. QA is more important: it means ensuring the quality of the products and services that are going to be produced and delivered. The difference is huge.
Just as an example, QA also includes review activities, which are able to detect 8-10 times more defects than testing in the same time unit. It facilitates the reuse of experience, measurements and know-how, for instance increasing the



estimation accuracy. QA checks that the people doing the work understand what they are supposed to be doing and are using best practices. It verifies that measurement and reporting are done efficiently. This leads to more accurate planning, and the list goes on.

Any effort, time and money put into continuous improvement programmes must be considered an investment rather than a cost. And you should expect a significant ROI. Yes, an investment is needed; change requires time and money. So, most of the various corner-cutting mechanisms that exist do not lead to success; they do the opposite and turn the investment into a cost, a waste of time and money that will never be recovered. Some such approaches may include:
• Just do it! - without training, without support, without a vision or a plan,
• Buy a miracle tool that will sort it all out,
• Buy the processes of someone else,
• Have someone write the processes (preferably by next Monday),
• Invent new processes,
• Have the same person(s) responsible for defining, measuring, implementing and evaluating their own work and results.

With a correct, healthy approach, however, significant optimisation is possible. Improvement programmes themselves can be optimised to minimise the investment and maximise the returns through some simple principles:
• Start from the way things are done currently (improvement, not invention),
• Document existing practices and identify bottlenecks,
• Focus on what is important for the organisation,
• Collect and analyse relevant measurements, share them across the organisation,
• Involve staff in defining the needs for improvement, while showing management commitment to it,
• Get professional help to get started, get training and coaching!

So back to the original question - why bother with all these?
1. Because it facilitates the survival of the species called East-European IT Services Company, especially the sub-species characterised by Outsourcing.
Because the IT industry - and especially the outsourcing business - in our region is heavily challenged as competition goes global and the continuously growing freelancer communities become an affordable alternative. We must be able to demonstrate continuous improvement over time in order not to be eclipsed by the competition.

2. Because it promotes the most powerful competitive advantage: Quality. While the competition might offer cheaper and faster services, there is something hard to compete on: quality. Have you ever seen an IT company claiming that they don't deliver quality? Of course not. Everyone claims to provide high quality products and services, and most of them do. But it is extremely difficult to explain its definition, because the definition of quality - if it exists at all - often comes down to measuring time and money. So the natural question is: does that mean that cheap and fast equals high quality? The truth is this is not an easy task: few companies are able to appropriately define, measure and improve their quality, and these details can make all the


difference.

3. Because it reduces the heavy cost of reactive corrections such as fixing and rework. Implementing a proper QA system in the organisation ensures that "quality is not someone else's business". It ensures that everyone contributes to it, instead of leaving it to "the QA's" to deal with. It ensures that the know-how and best practices are shared and made reusable. It ensures that measurements are collected and consolidated into statistical information, increasing the accuracy of future forecasts. It ensures that quality is engineered in from the beginning of the process, and that the investment in training and planning is not wasted but has a return.

4. Because it facilitates continuous learning. Learning from mistakes is fine. Every person, team and organisation makes mistakes, and it is normal to collect and analyse the lessons learned in order to improve. But this doesn't mean that we must make mistakes in order to learn. Sometimes we simply cannot afford that. So why not aim for making things right the first time? The usual constraint here is time: no time for reviews, for proper planning, for proper technical design, or for rework and fixing. In reality, the time allocated to ensuring that things are done correctly the first time is recovered threefold in later phases of the project lifecycle, through less testing, fixing and rework. There is a large variety of helping practices to rely on, such as reusing our past experience, our own and others' consolidated know-how, proven best practices and statistical information.

5. Because it is - or, more precisely, it can be - practical and applicable. It suits large and small: being a small start-up or a large corporation, neither is an excuse for not investing in improvement, and both can benefit from it.
a. Small companies are hungry for new business, for bigger deals, for larger customers. And often these prospects are demanding. They may simply ask about an SDLC model, which is not even that hard to answer.
But, if they go deeper into details, asking for explanations for why things are done in one way or another, then, well... we need a solid rationale behind it. Most of the time the described SDLC contains the design + development + test phases, well explained. What about planning? Monitoring? What about configuration management? Change management? Risk management? Planning without monitoring is a waste of time and effort. Monitoring without proper planning is nonsense. And this is how a promising deal can easily turn wrong.
b. Large corporations can generate significant losses if not optimised, sometimes leading to the schizophrenic situation of having mountains of know-how inside while no one can really use its value. Complex environments are subject to heavy dependencies - between various roles, departments and tasks. So there is no one to oversee the whole chain; strategy and plans often get blurred between layers, leading to heavy overhead and confusion for the engineering teams. These result in delays, wait times and eventually rework. Now, multiply this by the hundreds of brains & souls in an average corporation and the loss figures might be surprising.

6. Because it leads to tangible, measurable results. Just a few examples:
a. Yearly decrease in TTM - from 15% to 23% gains


b. Yearly increase in early defect detection - from 6% to 25% gains
c. Yearly reduction of field error reports - from 11% to 94%
d. Overall business value (ROI) - from 400% to 880%


7. Because we have to sell our services and products to increasingly demanding customers. It is nice to showcase the successes that we have had, but we must be able to demonstrate why they were successes and what made the difference. We should also be able to showcase "how" we made them successful. More importantly, we should be able to ensure that doing the same will repeat the success - that is why we need repeatable processes.

As a conclusion…

These are just a few of the reasons for which it is worth bothering with improvement. Of course, a full-fledged, large-budget improvement programme is not always affordable (even if it usually remains more affordable than a lost contract), but this should not be an impediment if there is will. Some small steps can be taken to start with:
• Start from current work practices. Document the current SDLC. Identify the gaps.
• Define objectives. Then act in small steps towards them.
• Get professional advice. It can maximise the ROI and it can make the programme practical.
• Organise trainings aimed at establishing critical competences such as project management, process improvement and risk management, but also at getting familiar with the model to be followed (be it CMMI, Six Sigma, ISO, TickIT Plus, etc.).
• Focus on people, as they are the only ones who deliver quality services and products, while a model or a set of processes are only tools that help them do so.
• Always, in any work practice, think according to the Shewhart-Deming cycle: Plan-Do-Check-Act.
• Organise retrospectives / lessons learned / post-mortems, collect measurements and analyse them.
• Implement reviews as often as possible, on as many artefacts as possible.

With a strong commitment, it can become easier than it might seem. And it is definitely worth the effort, the results can be amazing. I consider myself lucky to have experienced such a journey from start to success - no, not to end, because such a journey never ends - and I am truly hoping that someday many readers of this article will be able to say the same. And then, of course, there is the opposite question: why wouldn’t you bother improving your business?

Tibor Laszlo
tibor.laszlo@improving-it.com
Partner & Consultant @ Improving-IT



programming

AOP using Unity

In the last issue of Today Software Magazine we talked about the base principles of AOP and how we can implement the basic concepts of AOP using features of .NET 4.5, without using other frameworks. In this article we look at Unity and see how we can use this framework to implement AOP.

Recap

Radu Vunvulea
Radu.Vunvulea@iquestgroup.com
Senior Software Engineer @ iQuest


Before diving in, let's see if we can remember what AOP is and how we can use it in .NET 4.5 without any other framework. Aspect Oriented Programming (AOP) is a programming paradigm whose main goal is to increase the modularity of an application. AOP tries to achieve this goal by allowing the separation of cross-cutting concerns, using interception of different commands or requests. A good example for this case is audit and logging. Normally, if we are using OOP to develop an application that needs logging or audit, we will have, in one form or another, different calls to our logging mechanism scattered through our code. In OOP this can be accepted, because this is the only way to write logs, to do profiling and so on. When we are using AOP, the implementation of the logging or audit system needs to be in a separate module. Not only this, but we need a way to write the logging information without writing, in the other modules, the code that makes the call to the logging system,

using interception. This functionality can be implemented using tools inside .NET 4.5 like RealProxy and Attribute. RealProxy is the special ingredient in our case, giving us the possibility to intercept all the requests that are coming to a method or property.
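As a refresher, a minimal interception proxy built on RealProxy might look like the sketch below. The .NET types (RealProxy, IMethodCallMessage, ReturnMessage) are real; the class name LoggingProxy and the trace messages are illustrative, not taken from the previous article:

```csharp
using System;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;

// Minimal logging interceptor based on RealProxy (.NET 4.5).
// The target type must derive from MarshalByRefObject so that
// the runtime can build a transparent proxy for it.
public class LoggingProxy<T> : RealProxy where T : MarshalByRefObject
{
    private readonly T _target;

    public LoggingProxy(T target) : base(typeof(T))
    {
        _target = target;
    }

    public override IMessage Invoke(IMessage msg)
    {
        var call = (IMethodCallMessage)msg;
        Console.WriteLine("Entering " + call.MethodName);   // cross-cutting code
        object result = call.MethodBase.Invoke(_target, call.InArgs);
        Console.WriteLine("Leaving " + call.MethodName);    // cross-cutting code
        return new ReturnMessage(result, null, 0,
                                 call.LogicalCallContext, call);
    }
}
```

The caller would obtain the decorated instance with `(T)new LoggingProxy<T>(instance).GetTransparentProxy()` and then use it exactly like the original object; every method call passes through Invoke.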

Unity

This is a framework well known by .NET developers as a dependency injection container. It is part of the components that form Microsoft Patterns and Practices and is the default dependency injection container for ASP.NET MVC applications. Unity reached critical mass the moment it was fully integrated with ASP.NET MVC. From that moment, thousands of web projects started to use it. From a dependency injection container perspective, Unity is a mature framework that has all the capabilities a developer expects to find in such a framework. Features like XML configuration, registration, default or custom resolvers, property


injection and custom containers are fully supported by Unity. In the following example we will see how we can register the interface IFoo in the dependency injection container and how we can get a reference to an instance:

IUnityContainer myContainer = new UnityContainer();
myContainer.RegisterType<IFoo, Foo>();
// or
myContainer.RegisterInstance<IFoo>(new Foo());
IFoo myFoo = myContainer.Resolve<IFoo>();

When it is used with ASP.NET MVC, developers no longer need to manage the lifetime of the resources used by controllers; Unity manages it for them. The only thing they need to do is register these resources and request them in the controller, like in the following example:

public class HomeController : Controller
{
    Foo _foo;

    public HomeController(Foo foo)
    {
        _foo = foo;
    }

    public ActionResult Index()
    {
        ViewData["Message"] = "Hello World!";
        return View();
    }

    public ActionResult About()
    {
        return View();
    }
}

As we can see in the above example, Unity will be able to resolve all the dependencies automatically.
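What makes this automatic resolution work is a dependency resolver registered at application start-up. A sketch, assuming a Unity MVC bootstrapper package that supplies a UnityDependencyResolver class (the namespace and class name vary between package versions):

```csharp
using System.Web.Mvc;
using Microsoft.Practices.Unity;
using Unity.Mvc;   // assumed bootstrapper package providing UnityDependencyResolver

public static class UnityConfig
{
    public static void Register()
    {
        var container = new UnityContainer();
        container.RegisterType<Foo>();   // Foo from the controller example above

        // From now on, MVC asks Unity to build every controller,
        // so constructor parameters like Foo are resolved automatically.
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    }
}
```

UnityConfig.Register() would typically be called from Application_Start in Global.asax.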

Unity and AOP

Before looking at this topic in more detail, we should set our expectations at the proper level. Unity is not an AOP framework; because of this, we will not have all the features that make up AOP. Unity supports AOP at the interception level. This means that we have the possibility to intercept calls to our objects from the dependency injection container and configure a custom behavior at that level. Unity gives us the possibility to inject our custom code before and after a call to our target object. This can be done only for the objects that are registered in Unity. We cannot alter the flow for objects that are not in the Unity container. Unity offers us this feature using a mechanism similar to the one obtained with the Decorator Pattern. The only difference is that in

Unity case, the container is the one that decorates the call with a ‘custom’ attribute – behavior. At the end of the article we will see how easy it is to add a tracing mechanism using Unity or to make all the calls to database to execute using transactions. We can use Unity for objects that were created by Unity and registered in different containers or objects that were created in other places of our application. The only request from Unity is to have access to these objects in one way or another. How can Unity intercept calls? All the client calls for specific types go through Unity Interceptor. This is a component of the framework that can intercept all the calls and inject one or more custom behavior before and after the call. This means that between the client’s call and target object, Unity Interceptor component will intercept the call, trigger our custom behavior and make the real call to the target object. All these calls are intercepted using a proxy that needs to be configured by developer. Once the proxy is set, all the calls will go through Unity. There is no way for someone to ‘hack’ the system and bypass the interceptor. The custom behavior that is injected using Unity can change the value of input parameter or the returned value.
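To illustrate that last point, here is a rough sketch of a behavior that rewrites the returned value before the client sees it. The name UppercaseResultBehavior is illustrative, not part of Unity, and the IInterceptionBehavior interface used here is covered in the next section.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity.InterceptionExtension;

// Illustrative sketch: a behavior that post-processes the value
// returned by the intercepted method (here: upper-casing strings).
public class UppercaseResultBehavior : IInterceptionBehavior
{
    public IEnumerable<Type> GetRequiredInterfaces()
    {
        return Type.EmptyTypes;
    }

    public IMethodReturn Invoke(IMethodInvocation input,
        GetNextInterceptionBehaviorDelegate getNext)
    {
        // Forward the call to the target object first...
        IMethodReturn result = getNext().Invoke(input, getNext);

        // ...then replace the returned value before the client sees it.
        var text = result.ReturnValue as string;
        if (text != null)
        {
            result.ReturnValue = text.ToUpperInvariant();
        }
        return result;
    }

    public bool WillExecute
    {
        get { return true; }
    }
}
```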

How to configure the interception?

There are two ways to configure interception: from code or using configuration files. In both cases, we need to define the custom behavior that will be injected into Unity containers. Personally, I prefer to configure from code, using the fluent API that is available; I recommend using configuration files only when you are sure that you will need to change the configuration at runtime, without recompiling the code. That said, on our projects we have always ended up using configuration from files, because it is more flexible. Be aware that with configuration files you can introduce configuration issues more easily, for example by misspelling a class name, or renaming a type and forgetting to update the configuration files as well. The first step is to define the behavior that we want to execute when a call is intercepted. This can be done only from code, by implementing IInterceptionBehavior. The most important method of this interface is 'Invoke', which is called when someone calls one of the intercepted methods. From this method we need to make the call to the real method; before and after that call we can inject any kind of behavior. Here, I would have expected two separate methods, one called before the invocation and one after, but we don't have this feature in Unity. Another important member of this interface is the 'WillExecute' property. Only when this property is set to TRUE,


the call interception will be made. Using this interface, we have the possibility to control the calls to any method of the objects in our application; we have full control to make the real call or to fake it.

public class FooBehavior : IInterceptionBehavior
{
    public IEnumerable<Type> GetRequiredInterfaces()
    {
        return Type.EmptyTypes;
    }

    public IMethodReturn Invoke(IMethodInvocation input,
        GetNextInterceptionBehaviorDelegate getNext)
    {
        Trace.TraceInformation("Before call");
        // Make the call to the real object
        var methodReturn = getNext().Invoke(input, getNext);
        Trace.TraceInformation("After call");
        return methodReturn;
    }

    public bool WillExecute
    {
        get { return true; }
    }
}

Next, we need to add this custom behavior to Unity. We add a custom section to our configuration file that specifies the type for which we want the interception to be made. In the next example, we enable interception only for the Foo type.

<sectionExtension type="Microsoft.Practices.Unity.InterceptionExtension.Configuration.InterceptionConfigurationExtension, Microsoft.Practices.Unity.Interception.Configuration"/>
…
<container>
  <extension type="Interception" />
  <register type="IFoo" mapTo="Foo">
    <interceptor type="InterfaceInterceptor" />
    <interceptionBehavior type="FooBehavior" />
  </register>
</container>

In this example, we tell Unity to use the interface interceptor with FooBehavior for all instances of IFoo that are mapped to Foo. The same configuration can be made from code, using the fluent API:

unity.RegisterType<IFoo, Foo>(
    new ContainerControlledLifetimeManager(),
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior<FooBehavior>());

From this moment on, all calls to IFoo instances resolved from the Unity container will be intercepted by our custom behavior. There is also another way to implement this mechanism, using attributes. With attributes, you implement the ICallHandler interface to specify the custom behavior and create a custom attribute to decorate the methods you want to intercept. Personally, I prefer the first version, using IInterceptionBehavior.

Conclusion

In this article we saw how easily we can add AOP functionality to a project that already uses Unity. Implementing this feature is very simple and flexible. We can easily imagine more complex scenarios, combining IInterceptionBehavior with custom attributes. Even if we don't have specific methods called by Unity before and after the invocation, this can very easily be extended. I invite all of you to try Unity.
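For reference, the attribute-based alternative mentioned above could look roughly like this. TraceCallHandler and TraceAttribute are illustrative names of mine, not from the article; the sketch assumes the policy-injection pieces of Unity interception (ICallHandler, HandlerAttribute) from Microsoft.Practices.Unity.InterceptionExtension.

```csharp
using System.Diagnostics;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

// The handler holds the custom behavior, similar in spirit to an
// IInterceptionBehavior but applied only to decorated methods.
public class TraceCallHandler : ICallHandler
{
    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input,
        GetNextHandlerDelegate getNext)
    {
        Trace.TraceInformation("Before " + input.MethodBase.Name);
        IMethodReturn result = getNext()(input, getNext);
        Trace.TraceInformation("After " + input.MethodBase.Name);
        return result;
    }
}

// The attribute ties the handler to the methods it decorates.
public class TraceAttribute : HandlerAttribute
{
    public override ICallHandler CreateHandler(IUnityContainer container)
    {
        return new TraceCallHandler();
    }
}

public interface IFoo
{
    [Trace]
    void Do();
}
```

Note that, as I understand it, for the attributes to be honored the type has to be registered with a PolicyInjectionBehavior instead of a hand-written behavior.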



