Issue 31 - January - Today Software Magazine


No. 31 • January 2015 • www.todaysoftmag.ro • www.todaysoftmag.com

TSM

TODAY SOFTWARE MAGAZINE

Computer skills to cheat • JavaFX and RESTful Web Services communication

Dealing with complexity through TDD and Agile • 5 Tips for Useful Scrum Code Reviews • Academy+Plus • Boosting Agile in distributed teams • Large Scale Text Classification

Internet of Things in the Java Universe • What messaging queue should I use in Azure? • Converging Documentation in a multi-module software project • New Technologies – on the eve of Data Protection Day 2015



6 2014 overview and future plans for Today Software Magazine Ovidiu Măţan

8 15 Online Marketing & Technology trends you don’t want to miss in 2015 Călin Biriș

10 ACADEMY+PLUS Daniela Buscan

12 Startup events in Cluj: 2014 review Mircea Vădan

16 JavaFX and RESTful Web Services communication Silviu Dumitrescu and Diana Bălan

20 Computer skills to cheat Cristian Șerban

22 Dealing with complexity through TDD and Agile Radu Ometita

26 Boosting Agile in distributed teams Tiberiu Cifor

30 5 Tips for Useful Scrum Code Reviews Alexandru Bolboacă

32 Converging Documentation in a multi-module software project Alexandru Albu

36 Large Scale Text Classification Cristian Raț

38 Internet of Things in the Java Universe Dănuț Chindriș

42 What messaging queue should I use in Azure? Radu Vunvulea

45 New Technologies – on the eve of Data Protection Day 2015 Claudia Jelea

47 Gogu and the alternatives game Simona Bonghez, Ph.D.


editorial

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com
Editor-in-chief Today Software Magazine

Happy New Year!!! We are hitting the road this year full of enthusiasm and eager to face new challenges. One of them is to achieve the new goals TSM has set. We plan to launch the TSM membership card and an online page dedicated to jobs. For mobile users, we are soon going to publish an application dedicated to Windows Phone. We have also set out to organize new events, such as a conference dedicated to Java experts, which will probably take place in the summer. You can find more information on some of these subjects in the first article of this issue. Thank you for being with us, and we thank our partner companies for their support.

In this issue of the magazine we have published a series of articles which guide you towards a better organization inside the team, of which I would like to mention: Five practical pieces of advice for Code Review in Scrum, Decreasing complexity with TDD and Agile, The convergence of documentation in a multi-modular software project and Performance in distributed teams. The article entitled Software architecture opens the series of articles dedicated to this complex subject, namely software architecture. The Java universe is represented by two articles: Internet of Things in the Java universe and JavaFX and communication through RESTful Web Services, an article that continues the series on JavaFX. Security is a domain we should all take into consideration when developing any software product, as the article Competence or fraud? argues. We go on with an analysis of the messaging system in Azure and, in the end, you can enjoy reading the last episode of the longest TSM series, the one dedicated to Gogu's adventures.

Ovidiu Măţan

Founder of Today Software Magazine


no. 31/2015, www.todaysoftmag.com


Editorial Staff

Editor-in-chief: Ovidiu Mățan / ovidiu.matan@todaysoftmag.com
Editor (startups & interviews): TBD / marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău / dan.hadarau@todaysoftmag.com
Copyright/Proofreader: Emilia Toma / emilia.toma@todaysoftmag.com
Translator: Roxana Elena / roxana.elena@todaysoftmag.com
Reviewer: Tavi Bolog / tavi.bolog@todaysoftmag.com
Accountant: Delia Coman / delia.coman@todaysoftmag.com

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
contact@todaysoftmag.com
www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag
ISSN 2285 – 3502
ISSN-L 2284 – 8207

Authors list

Tiberiu Cifor / tiberiu.cifor@3pillarglobal.com / Engineering Manager @ 3Pillar Global
Alexandru Bolboacă / alex.bolboaca@mozaicworks.com / Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works
Radu Ometita / radu.ometita@fortech.ro / Software engineer @ Fortech
Alexandru Albu / alexandru.albu@isdc.eu / Senior Developer @ ISDC
Cristian Raț / Cristian.Rat@Yardi.Com / Software Developer @ Yardi
Simona Bonghez, Ph.D. / simona.bonghez@confucius.ro / Speaker, trainer and consultant in project management, Owner of Colors in Projects
Silviu Dumitrescu / silviu.dumitrescu@accesa.eu / Java Line Manager @ Accesa
Diana Bălan / Diana.Balan@accesa.eu / Java developer @ Accesa
Călin Biriș / calin.biris@loopaa.ro / Digital Director @ Loopaa
Mircea Vădan / mircea.vadan@gmail.com / www.clujstartups.com
Radu Vunvulea / Radu.Vunvulea@iquestgroup.com / Senior Software Engineer @ iQuest
Daniela Buscan / dbuscan@pitechnologies.ro / Account Manager @ PITECH+CONCEPT
Cristian Șerban / Cristian.Serban@betfair.com / Application Security @ Betfair
Levente Veres / Levente.Veres@endava.com / Design Lead @ Endava
Claudia Jelea / claudia.jelea@jlaw.ro / Lawyer @ Jlaw
Dănuț Chindriș / danut.chindris@elektrobit.com / Java Developer @ Elektrobit Automotive

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code. www.todaysoftmag.ro www.todaysoftmag.com



overview

2014 overview and future plans for Today Software Magazine

2014 meant a period of maturation for our magazine, through the increase in the number and quality of the published articles. In numbers, this was reflected in the growing number of online sessions, which reached 7,000 per month, and in the attendance of the release events, which went up to an average of 70-80 participants. The peak was reached in September, when the release event registered over 120 participants, and online during the IT Days period, when there were over 10,000 sessions per month.

www.todaysoftmag.ro - 2014 (romanian)

The main novelty of 2014 was the complete overhaul of our online image through the launch of the new site. The magazine's visual identity was adjusted to current design trends, and the back-end, including the administration part, was completely rewritten. It was an exercise in professionalism, carried out with the help of Gemini Solutions and the Subsign design agency. The year concluded under the umbrella of IT Days, where we had over 200 participants. I will not go into details, since I elaborated on it in the last issue of the magazine. I would only like to mention that during the three days, if we also take into consideration the two workshops, we enjoyed exceptional presentations and a friendly environment.

www.todaysoftmag.com - 2014 (english)

Projects for 2015

The Today Software Magazine member card

Over the year, the magazine release events were not limited to the ones in Cluj. We met the readers of our magazine in Bucharest, Timisoara, Brasov, Iasi and Targu Mures. These events were hosted by the sponsors of the magazine, but also organized together with our colleagues from Gemini Foundry or Cluj IT Cluster. We enjoyed a wonderful audience and we promise to come back. In Cluj, the release events were hosted, as always, by the sponsors of the magazine. One novelty was the multitude of newly inaugurated headquarters, and the release of the magazine represented a good opportunity to present the new spaces to the local IT community.


We are talking about the TSM card, an idea that appeared a long time ago, but which has only now materialized into a concrete offer. The TSM card addresses the active members of the Romanian IT community, those who wish to constantly improve their knowledge in the domain, to whom TSM can bring extra information and even more, on a monthly basis. To be more specific, the TSM card offers you:
• The printed magazine – delivered monthly, during the entire year, to the address you specify, through the Romanian Post Office. It is a comfortable way to ensure some quality reading every month. As a matter of fact, the printed magazine will be available exclusively to the TSM members and to those who participate in the release events and partner events where it is given away.
• The IT Days 2015 Book – just like in 2014, we will print and distribute the book of the event.
• Important discounts to the TSM events – we offer a 50% discount for Cluj IT Days and a 20% discount for the workshops organized by Today Software Magazine.
• Discounts to national IT events – the first such events are "even Mammoths can be Agile", where the owners of the card will get a 20% discount, and Mobos, where there will also be a 20% discount.
• Advice for article publishing in TSM magazine.
• Involvement in community projects – those who wish can get involved in developing projects for the local communities, such as eLearning or an online platform dedicated to volunteers and volunteering organizations. The most active members will be rewarded with free invitations to different IT events.

The launching price of this card is 300 RON + VAT/year and you will be able to order it online soon or, beginning from today, by sending us an e-mail to card@todaysoftmag.com. By purchasing this card you show, at the same time, your support for Today Software Magazine and its projects.

The job page

In February, we are going to launch a new project. It is a job page, which will be available only online, with a separate link from the main page. Thus, we are trying to support the very dynamic IT market through quality ads oriented towards its needs in a friendlier way than the currently existing solutions. We also encourage startups, through a section dedicated to them and a low price for advertisement publishing. The page will evolve in the future and we are thinking about re-publishing the online calendar and a new section dedicated to trainings.

We hope to have made you curious and we are waiting for you online and at the release events, to become TSM members and maybe even follow the future job page.

Ovidiu Măţan
ovidiu.matan@todaysoftmag.com
Editor-in-chief Today Software Magazine



marketing

15 Online Marketing & Technology trends you don’t want to miss in 2015

"You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life." - Steve Jobs

Looking back at 2014, we saw some trending topics and technologies that will surely change the way we communicate, interact and invest in advertising. Based on our observations, connecting the dots, we give you our 10 Online Marketing trends and our 5 Technology trends for this year and the years to come.

Online Marketing trends for 2015

1. Mobile presence is a "must have"

In Romania we already have 7.5 million active smartphone users, around 37.5% of our country's population. In 2014 business owners realized that they couldn't ignore 40% of their clients or potential clients, so they started to think mobile. In 2015 we expect to see new online shops enter the market with responsive websites, and the web shops that are already responsive will look into developing their own mobile apps.

2. More quality branded content

As the social landscape becomes more crowded with commercial messages but lousy content, smart companies will invest in differentiating themselves by creating quality branded content. This is a big opportunity for storytellers to do some magic and show that Social Media does a good job at bringing in new customers and nurturing the best ones.

3. The fight for video content in the social media landscape

Last year Facebook introduced auto-play for all the videos on the platform. As a consequence, companies and publishers alike started posting their videos directly on Facebook. This created a huge opportunity for Facebook to sell more video ads, and for brands to get more reach by creating video content. This started a "war" with YouTube. We expect Twitter and LinkedIn to join the fight for video content as well.

4. Companies will look for new Social Media channels

As we know and noticed, Facebook changed the newsfeed algorithm, which significantly reduced the reach of organic posts from Facebook Pages. This is not very good news for brands. In 2015 companies will be on the lookout for the next big (free) thing, or at least will invest more in growing the communication channels they own: blog and newsletter.

5. LinkedIn will become more of a business-publishing platform

Have you tried the new "Create a post" functionality? It's awesome! Any LinkedIn user can now write personal posts that look very similar to blog posts. When someone writes a post, all his/her connections get notified about it. We expect that in 2015 this functionality will also be made available for business pages on the platform.

6. Facebook @ Work

Facebook started working on a business collaboration platform to rival LinkedIn. Maybe we will see a beta version in 2015.

7. Omni-channel becomes more important

Today a regular consumer uses between 10 and 20 sources of information before deciding what to buy. Searching is the norm, friends' opinions are gold, and offering a product experience is the new marketing. Companies can't afford not to invest in online marketing and will invest more and more in integrated advertising.

8. Companies will spend more on Social Ads

The new Facebook algorithm is forcing companies to invest in Facebook ads to reach their fans with their status messages. In doing so, companies will find out that Facebook Ads are very easy to create and precise to target, are cheap and offer extra interactions for free. This is a very good deal for any brand, so we expect to see




more ads on Facebook, and maybe on other social networks too.

9. Companies will spend more on video ads

Considering that Millennials (or Generation Y) don't watch so much TV and spend more time browsing the web, big brands will reschedule part of their TV ad budget to online video ads. But as we know, in the online advertising space you don't need big budgets to advertise, so now any company can afford to deliver fancy video ads to its customers. We experimented last year with Facebook video ads and the results were surprisingly good, so we expect to see more video ads on our PC, mobile and tablet screens.

10. Companies will start retargeting

In 2014 we saw a few examples of retargeting campaigns. This year we expect to see more smart retargeting ads and more brands jumping on the bandwagon. As a conclusion to the Online Marketing trends, we believe that 2015 will be the year of mobile commerce, quality content, social and video ads. Consumers will be more targeted, more entertained and more informed on any platform they use.

Tech trends for 2015

11. Health apps and wearables

This year we think that the big technology companies will push consumers to buy more wearables. The problem with these devices is that there are few apps for them, and this makes them not so cool. One thing could save them: Health Apps. Having a smart watch on your wrist can help you know vital information about your body.

12. Virtual Reality

If 3D movies were a big thing when everybody could test them in cinemas, Virtual Reality could be the next hit. The Oculus Rift is already something any geek would die to own and see integrated into video games.

13. Mobile Payments

In the USA people can already use Apple Pay and Google Wallet. It is said to be the next thing in payments. In Romania we are still trying to educate the consumer to pay online with a credit card, so maybe Mobile Payments won't be implemented very quickly in our country.

14. Beacons

For marketing purposes, beacons offer a big opportunity to know more about consumers and give them relevant content and advertising. We expect to see more development with this technology.

15. Self-driving cars

The most important news from CES 2015 was about self-driving cars. Audi sent an A7-based research model on a self-driving journey from the San Francisco Bay Area to Las Vegas. The A7 made it to Las Vegas without incident. BMW and Mercedes-Benz are also making big steps towards making this dream happen.

We are sure that this year will be a memorable one for the IT and Marketing industries, and it all depends on us to make the best of it. Enjoy it! Connect the dots, and for sure next year we will be talking about how 2015 was the year of cool technologies, smart people and social marketing.

Călin Biriș

calin.biris@loopaa.ro Digital Director @ Loopaa



education

Romanian young people passionate about IT now have access to an innovative method of learning, delivered by ACADEMY+PLUS

The development of the IT sector creates more and more interesting, attractive jobs every year for young people who have a passion for informatics. However, due to the increasing number of hirings over the past few years, companies have started to look for more complex profiles. At the same time, there are also candidates who understand that, in order to stand out, they need complementary professional training. This is the reason why several alternative learning methods have appeared at the national level in the past few years. These provide candidates with the experience of a specific specialization, either in a certain technology or in connected fields, such as marketing, management, design or social media. Currently, there is a gap in the field of IT-specialized human resources in the US as well as in Europe, Romania included, caused by the number of students, on the one hand, and a lack of adaptation of skills to the market, on the other. If we take a look at the figures, we will see that there are about 100,000 employees in the field of IT in Romania and that in 2014 there were 17,148 job offers for various specialization stages in the biggest cities of Romania, according to the job offers published on specialized websites. Knowing this, we also have to remember the fact that the best technical high school or university graduates either go on to study abroad or work for well-known foreign companies. Last year, starting from these premises, PITECH+PLUS launched a revolutionary learning and training technique in IT. ACADEMY+PLUS is a new school based on an updated curriculum, adapted to the demands of the market, which uses an innovative learning method for students. The objectives of this school are to meet companies' need to hire people capable of understanding a project from a technical, managerial and sales & marketing perspective, and to create a culture, a "geek culture", that will draw people with good learning skills into informatics.


ACADEMY+PLUS was launched on the Romanian market as a partnership with School 42 in Paris. The latter is a "nursery" for 1,000 students per year who learn informatics following the rules dictated by the industry. The partnership signed in March 2014 started a project unique in Romanian IT: a school open to anyone who is passionate about informatics, exclusively financed by the Cluj-based company PITECH+PLUS. The admission process to ACADEMY+PLUS is identical to that of School 42: the pre-selection; the check-in, an interview with the candidates who pass the pre-selection; the pool, a 28-day testing period; and the go/no-go decision. At the end of the three 28-day evaluations, out of about 1,200 candidates on the platform, only 58 got a go. The school inauguration ceremony took place in November 2014 and currently the students are learning through the projects launched on the e-learning platform put at their disposal. The selected students do not make up a homogenous, same-age group and do not necessarily have the same interests. Admission was open to all those who are passionate about informatics and, apart from the results obtained at the testing, the team that did the recruitment did not take age (except in the case of minors) or previous experience into consideration. Thus, ACADEMY+PLUS now has fresh high school graduates as well as older people who want a professional reorientation towards IT. The unique character of this project is reflected in different ways. First of all, the school in itself is an organizational microenvironment. Secondly, students are provided with a technologically well-equipped space, a games room, brainstorming rooms and


are surrounded by professionals who are at their disposal as mentors rather than teachers. The psychological ingredients that make the whole work without a timetable, without strict attendance rules and conditioning, are self-responsibility and the development of decision-making skills. The school is based on the idea that there is creativity in IT and offers its students the freedom to experiment in a community that they themselves create. Regarding the method and the syllabus, there are a few particularities:
• Peer-to-peer learning – the students learn to evaluate one another and, eventually, the purpose is for them to gain analytical skills.
• Team work – the evaluations, as well as some of the projects, are done in teams. This way, students are encouraged to communicate among themselves.
• Practice – students are given all the necessary tools to approach an issue and then they solve an exercise through a practical approach.
• Community – students are guided to create their own community with their own values.

In the short term, our objectives are to triple the number of students in 2015, to customize training modules that will increase flexibility and customization in accordance with each candidate's training – so that those with a higher level of knowledge can choose a certain training mix – and to open branches in other Romanian cities. In the long term, the objectives of ACADEMY+PLUS are to increase the degree of collaboration with companies that are looking for young talented people, to increase the performance standards in the industry and to discover, through its method, brilliant minds that are able to create the informatics products of tomorrow.

Daniela Buscan

dbuscan@pitechnologies.ro Account Manager @ PITECH+CONCEPT



startups

Startup events in Cluj: 2014 review

As in the previous 2 years, it's time for a review of the past 12 months. In general, I feel that the Cluj ecosystem is settling down; the "Brownian movement" of support initiatives for startups seems clearer now, and I believe we're entering a phase of growth in which the "actors" are already known and will keep on playing and improving their roles. Mircea Vădan

mircea.vadan@gmail.com www.clujstartups.com

In the following lines, I attempt a classification based on the size and overall purpose of these initiatives, followed by a reminder of several startups that got noticed in the last year.

Regarding events, we should note the emergence of several new concepts, so that almost every month was taken by one of them:

In March we had the well-known Startup Weekend Cluj (http://cluj.startupweekend.org/), at its third edition. Beyond the large number of participants, the surprise was that all 4 winning teams were led by women, which could mean a step towards a maturing ecosystem and a gradual increase in female involvement (a frequently discussed issue in more advanced ecosystems; there even exist investment funds focused on supporting female tech entrepreneurship).

Startup Pirates Cluj (http://cluj.startuppirates.org/) had its first edition in February 2014. It was a week-long event with international mentors and workshops designed to give participants a clear vision of what lean startup is. The following topics were addressed: business model, customer validation, pitching, marketing for startups and investment. Basically, participants came with startup ideas and during the event they worked on developing them under the guidance of those mentors.

Transylvania Demoday (http://startuptransilvania.ro/demo-day/) counted two editions last year (in April and in December) and consisted of a session of 5-minute pitches plus questions from the jury. Each edition gathered around 10 local startups invited to pitch.



In November, IT Days (http://www.itdays.ro/), at its 2nd edition, had one of its tracks on entrepreneurship. Thus, in addition to several pitches of startups, university research projects with potential for business spin-offs were presented as well.

Techsylvania (http://www.techsylvania.co/) positioned itself as an event with an international perspective, focused on innovation in hardware technologies (examples of devices that could be tested at the event: Jawbone, Google Glass, Pebble, Sphero, Onyx Beacons, Leap Motion, Withings, EyeTribe Tracker). An interesting part of the event was the hackathon on wearables and connected technologies, in which teams could develop solutions to various problems based on the technologies mentioned above. A new edition of Techsylvania will take place in early June 2015.

Another event, this time dedicated to students, was 3 Day Startup Cluj (http://cluj.3daystartup.org/). In addition to its exclusive focus on students, it is distinguished by the fact that participants are selected after an application and a face-to-face interview, with a limit of 40 places. Similar in structure to other weekend events (pitching, selecting ideas, forming teams and then working, pitching and judging), it does not focus on MVP development, but rather on business validation, mainly through interviews with prospective customers and feedback from mentors.

Startup Live Cluj (http://startuplive.in/cluj-napoca/3) was the last weekend-based event of the year and it was closely connected with Transylvania Demoday; some of the ideas presented there were included in the pitching session of the Demoday.

Counting as events, but this time smaller and with higher occurrence, we should mention the explosion of meetups focused on various topics related to technology and startups. So, every week this fall, we had at least 3 meetups on topics like: Agile Development, Growth Hacking, Bitcoin, How to Start a Startup Lectures, SpartUP, UX/UI, Startup Lounge, Mobile Monday (http://clujstartups.com/#events). Most of them took place at Cluj Hub and were promoted mainly through meetup.com. Of course, we must also mention here TSM's launch events, which addressed tech startup themes as well (thanks to Ovidiu).

In terms of programs, thus lasting at least several weeks, we can mention three initiatives:

Tandem by GRASP (http://tandem.mygrasp.org/) was organized by the Global Romanian Society of Young Professionals, lasting 8 weeks and having a format focused on non-formal entrepreneurship education, with the aim of leading participants through the process of defining the business model concept, pitching and developing a prototype.

Simplon Romania (http://ro.simplon.co/), organized with the support of Cluj Cowork and lasting for 6 months, started in October and aims to educate participants on how to develop a tech product (even if they don't have any technical experience) and how to launch it on the market. Basically, it is a mix between an incubator and a programming school. Having a community orientation, it also hosted meetups open to other startup-interested people.

Spherik Accelerator (http://spherikaccelerator.com/) had its first application period in October and seven startups were accepted to attend the 4-month program. Startups received office space and were included in a series of workshops and mentoring sessions. The accelerator is a non-profit; for the services offered it takes only 3.14% equity, and only if the startup obtains funding in the following year. So after the program ends, Spherik will actively help startups get a round of funding, and any subsequent accelerator profits will be reinvested in the program.




In addition to these programs, we also have to mention Startcelerate (http://startcelerate.com/), an initiative born in Cluj but present mainly in the UK, which aims to connect IT companies that have available resources with startups and investors who need those kinds of resources. So far, a series of pitching and matching events have been organized, including in Cluj, and others are planned for the near future in several European capitals. In the last year, we also had a few startups that were awarded or accepted in various international acceleration programs:
• CallerQ (http://www.callerq.com/ - "we help sales professionals to increase the efficiency of prospecting and provide analytics to sales managers") participated in the Warp Krakow program, supported by hub:raum;
• Asiqo (http://asiqo.com/ - "mobile application that enables brands to interact with their global audience through advertisements"), meanwhile closed, won the Telekom Innovation Contest for startups based in Romania;
• Evolso (http://evolso.com/ - "The dating app giving power back to the ladies") participated in and got funded by the StartupYard Accelerator (Czech Republic);


• ZenQ (http://zenq.co/ - "ze way to say thank you and appreciate your friends") was incubated in TechPeaks (Italy) soon after Startup Weekend;
• Project Wipe (http://www.projectwipe.com/ - "electronic glasses that help people with visual disabilities in orientation and obstacle avoidance") qualified in the Startup Spotlight finals (@HowToWeb);
• TeenTrepreneur (http://www.f6s.com/theteentrepreneur - "virtual financial educational game"), after participating in Startup Pirates and Startup Weekend, will be incubated by the Watson University Accelerator (Colorado, USA) in 2015.

Beyond the well-known exits - Skobbler and LiveRail (acquired by Telenav and Facebook respectively) - we can mention several other startups that are growing quickly and/or have proved some steady revenue:
• DollarBird (http://dollarbird.co/ - "smart calendar app for managing personal finances")
• Microstockr (http://www.microstockr.com/ - "app that helps you track sales on major microstock photography websites")
• TinTag (http://www.thetintag.com/ - "rechargeable tracking device for your lost items")
• Moqups (http://moqups.com - "Online mockups made simple")
• Onyx Beacon (http://onyxbeacon.com/ - "iBeacon CMS for retailers")
• Rebs (http://crm.rebs-group.com/ - "Specialized software for real estate agencies")
• HipMenu (https://www.hipmenu.ro/ - "food ordering mobile app")
• Squirrly (http://www.squirrly.co/ - "content marketing tool allowing you to optimize content and measure its success")

Of course, in addition to the initiatives and startups mentioned, this spring we will have a few more support initiatives, which you'll be able to read about in the coming articles.


communities

IT Communities

2015 promises us a series of quality events and we are starting, how else, with the launch of issue 31 of TSM. This month we recommend the Mobile Monday (Cluj) and Testing Camp events in Iași and Timișoara. A new event starting in Cluj is OWASP, and we wish them many achievements.

Transylvania Java User Group
Community dedicated to Java technology
Website: www.transylvania-jug.org
Since: 15.05.2008 / Members: 598 / Events: 47

TSM Community
Community built around Today Software Magazine
Websites: www.facebook.com/todaysoftmag, www.meetup.com/todaysoftmag, www.youtube.com/todaysoftmag
Since: 06.02.2012 / Members: 2068 / Events: 28

Cluj Business Analysts
Community dedicated to business analysts
Website: www.meetup.com/Business-Analysts-Cluj
Since: 10.07.2013 / Members: 91 / Events: 8

Cluj Mobile Developers
Community dedicated to mobile developers
Website: www.meetup.com/Cluj-Mobile-Developers
Since: 05.08.2011 / Members: 264 / Events: 17

The Cluj Napoca Agile Software Meetup Group
Community dedicated to the Agile methodology
Website: www.agileworks.ro
Since: 04.10.2010 / Members: 437 / Events: 93

Cluj Semantic WEB Meetup
Community dedicated to semantic technology
Website: www.meetup.com/Cluj-Semantic-WEB
Since: 08.05.2010 / Members: 192 / Events: 29

Romanian Association for Better Software
Community dedicated to experienced developers
Website: www.rabs.ro
Since: 10.02.2011 / Members: 251 / Events: 14

Tabăra de testare
Community of testers from the IT industry, with monthly meetings
Website: www.tabaradetestare.ro
Since: 15.01.2012 / Members: 1243 / Events: 107

Calendar

January 20 (Cluj) Launch of issue 31 of Today Software Magazine
www.todaysoftmag.ro

January 22 (Timișoara) TdT #28 - How was it at EuroSTAR?
meetup.com/Tabara-de-Testare-Timisoara/events/129617852/

January 22 (Iași) Protractor, end-to-end testing for AngularJS
meetup.com/Tabara-de-Testare-Iasi/events/218963478/

January 26 (Cluj) Mobile Monday Cluj #15: Mobile game development
meetup.com/Cluj-Mobile-Developers/events/177047022/

January 27 (București) Entrepreneurs' January meeting
meetup.com/EntrepreneursClub-Clubul-Antreprenorilor/events/219800978/

January 29 (Cluj) OWASP Cluj-Napoca InfoSec Event 2015
owasp.org/index.php/Cluj#tab=Upcoming_events

February 5 (Cluj) Drupal Cluj Meetup
meetup.com/Drupal-Cluj/

February 10 (Cluj) UI/UX Cluj Meetup #12
meetup.com/UXUICluj/events/177042112/



programming

JavaFX and RESTful Web Services communication

Silviu Dumitrescu (silviu.dumitrescu@accesa.eu)
Java Line Manager @ Accesa

Diana Bălan (Diana.Balan@accesa.eu)
Java Developer @ Accesa

A client application can access remote distributed resources. There are several ways to access these resources, but perhaps the most portable one is through web services. In this article we will talk about REST (Representational State Transfer) services: self-descriptive, modern services whose Java API has evolved remarkably over the last versions of the Java Enterprise platform. We will start by discussing some architectural aspects which help in understanding the components of a distributed application that uses web services.

The two-tier architecture

This architecture has two essential components:

A. The client application, with the following features:
• It directly accesses the database, with the disadvantage of requiring code that has to be altered for different types of databases. We can also end up with a bottleneck in the case of data requests which require an important traffic volume for the data transport.
• It implements the logic of the application, with the note that it can be limited by the capacity of the client station (memory, CPU) and, in addition, it requires code that has to be distributed to each client.

B. The database server.

A JavaFX client application consists of the following:
• A component which contains the FXML files representing the front-end, the controller classes that perform the event handling, a class for launching the application, CSS files, as well as formatting classes
• A component which contains the entity classes, which are mapped to the tables of the database
• A component which contains the classes that carry out operations on the database, using the previous component (DAO classes)
• JPA, which is used in order to connect to the database and to perform the operations on it more easily

The advantages of using this architecture are:
• It is much more extensible than the one-tier architecture
• It combines the presentation logic, the business logic and the data resource in a single system
• It can have a client on any host, as long as it is connected to the database through a network
• It has fewer weak points that can generate errors than the three-tier system

The disadvantages are:
• Any alteration in the business strategy triggers a modification in the logic of the application, with implications on each client. This can be very costly and time consuming.
• Each client needs a connection to the data resource.
• It restricts or complicates the addition of caching, mirroring, proxy services or secure transactions.
• Since the business logic is on the client, the entire database is exposed in the network.

The three-tier architecture

It has the following structure:
• The client application - much fewer resources are necessary on the client station; no alterations are necessary if the location of the database changes; there is less code to be distributed to the client stations
• The application server - handles the requests coming from several clients and reduces the data traffic in the network
• The database server

The server-side application is described as follows:
• It will contain the business logic needed to carry out CRUD operations on the database found on the database server
• The components will be published as web services
• The web services will be developed using Jersey, which is an implementation of the JAX-RS specifications
• They will be deployed on an application server, for instance GlassFish, and they can be consumed by using HTTP

Thus, the application server will contain a collection of RESTful web services, which in their turn will communicate through JPA with the database server. The GlassFish application server provides the infrastructure for both the JPA and the JAX-RS APIs.

The advantages of using a three-tier architecture are:
• It allows an efficient usage of the connection to the data resource, by using connection pooling
• We can alter the business logic without affecting the client software
• It is a much more suitable architecture for scaling and load balancing than others
• Scaling mostly affects the middle tier

The disadvantages are:
• It increases the network traffic
• It is much more vulnerable to errors
• The business objects must be designed to handle the integrity of the transactions

In spite of the disadvantages listed above, the three-tier architecture is often used for big applications. It is the one we will focus on in this article and in the following articles on this topic. In the following part, we will briefly discuss the web service paradigm, in order to clarify the ideas even for those who are less familiar with it.

Web Services

Web services are applications which communicate through HTTP on the World Wide Web. They provide a standard which facilitates the interoperability of software applications running on a variety of platforms and frameworks. Their interoperability and extensibility are given by XML, and they can be combined in a loosely coupled manner in order to obtain complex operations. By using web services, a two-tier application can be changed into a three-tier one, which can operate over the web; the application thus becomes extensible and interoperable with different types of client applications.

There are two types of web services:
• SOAP (Simple Object Access Protocol) services use XML messages which define the architecture of the message formats. These systems often contain a description of the operations offered by the service, written in a WSDL (Web Service Description Language) file, which is in fact an XML file.
• RESTful (Representational State Transfer) services are much more appropriate for basic scenarios, with ad-hoc integration. They are much better integrated with HTTP than SOAP and do not require XML or WSDL definitions. They are based on the JSR-311 specification, of which Jersey is one implementation. REST services use W3C and IETF (Internet Engineering Task Force) standards: HTTP, XML, URI, MIME.

We will use REST services for integration over the web, and SOAP services in enterprise applications that have integration scenarios requiring advanced quality-of-service (QoS) capabilities. We choose JAX-RS since such services are easy to consume for many types of clients, while allowing the server to evolve and scale; clients may choose to consume some or all aspects of the service and combine them with other web services.

REST applications are simple, lightweight and fast because:
• Resources are identified through URIs, which provide a global addressing scheme
• A uniform interface is used for resource manipulation
• Self-descriptive messages, i.e. metadata for the resources, are used
• Stateful interactions through hyperlinks are based on the concept of explicit state transfer

REST implies a stateless client-server architecture. A REST service exposes a set of resources which identify the targets of the interactions with its clients. The resources are identified through URIs and are manipulated through four operations: PUT, GET, POST and DELETE. The resources are decoupled from their representation, so that they can be accessed in a variety of formats: HTML, XML, plain text, PDF, JPEG and JSON.


The metadata on the resources is used, for example, to control the cache, to detect transmission errors, to negotiate the most suitable representation format, and for authentication or access control.

Any interaction with a resource is stateless, so each message is self-contained. There are several techniques available for conveying state, such as rewriting the URI, cookies and hidden fields. The state can be included in the reply message in order to create future states of the interaction.

The clients of the web service who wish to use these resources access a certain representation by transferring application content through a small set of remote methods which describe the action to be performed on the resource:
• GET is used to obtain data or to perform a query on a resource. The data returned from the web service is a representation of the requested resource.
• PUT is used to create a new resource. The web service may respond with data or with a status indicating success or failure.
• POST is used to update resources or existing data.
• DELETE is used to erase a resource or data.

In certain cases, the update or delete actions can be done through POST (for instance, when the service is consumed by browsers which do not support PUT or DELETE). The following mappings can be applied for PUT and POST:
• Create = PUT, if we send the entire content to a specified resource (URL)
• Create = POST, if we send an order to the server to create a resource subordinated to the specified resource, using server-side algorithms
• Update = PUT, if we update the entire content of the specified resource
• Update = POST, if we ask the server to update one or several resources subordinated to the specified resource
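To make these mappings concrete, here is a small, hypothetical in-memory sketch; it is not part of the article's Jersey example, and the class and method names are invented for illustration. put stores the full content at a URI chosen by the client and is idempotent, while post asks the "server" to allocate a URI subordinated to the collection and returns it, as a real service would through the Location header.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Toy in-memory stand-in for a server, illustrating the PUT/POST mappings.
public class ResourceStore {
    private final Map<String, String> resources = new HashMap<>();
    private final AtomicInteger nextId = new AtomicInteger(1);

    // Create/Update = PUT: the client chooses the target URI and sends
    // the entire content; repeating the call has the same effect (idempotent).
    public void put(String uri, String content) {
        resources.put(uri, content);
    }

    // Create = POST: the server allocates a URI subordinated to the
    // collection (a server-side algorithm) and reports it back.
    public String post(String collectionUri, String content) {
        String uri = collectionUri + "/" + nextId.getAndIncrement();
        resources.put(uri, content);
        return uri;
    }

    public String get(String uri) {
        return resources.get(uri);
    }
}
```

Note that repeating the same put leaves the store unchanged after the first call, while each post creates a new subordinated resource; this difference is precisely why PUT is the idempotent choice of the two.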

Developing a REST web service with JAX-RS

JAX-RS is a Java API:
• Designed to facilitate the development of applications using the REST architecture
• It uses Java annotations in order to simplify the development of REST services
• It uses runtime annotations which, by reflection, will generate the helper classes and artefacts for the resource

Jersey implements the support for the annotations defined by the JAX-RS specifications. When an archive of a Java EE application containing JAX-RS resource classes is deployed on the application server, the resources are set up, the helper classes and artefacts are generated, and the resource is exposed to the clients.

Let us take the following example of a file representing the code of the root resource class of a REST web service, which uses JAX-RS annotations:

    package com.example.ws;

    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.PUT;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;

    @Path("/hello")
    public class Hello {

        private String name = "";

        /**
         * Default constructor.
         */
        public Hello() {
        }

        /**
         * Retrieves a representation of an instance of Hello.
         * @return an instance of String
         */
        @GET
        @Produces("text/plain")
        public String sayHello() {
            return "Hello World" + name;
        }

        /**
         * PUT method for updating or creating an instance of Hello.
         * @param content representation for the resource
         */
        @PUT
        @Consumes("text/plain")
        public void putText(String content) {
            name = content;
        }
    }

The annotations used are:
• The value of the @Path annotation is a relative URI. In our example, the Java class will be hosted at the URI path /hello. The URI is static here, but we can also include variables in it: URI path templates are URIs with variables embedded in the URI syntax.
• The @GET annotation is a request method designator, together with @POST, @PUT, @DELETE and @HEAD; they are defined by JAX-RS and correspond to the similarly named HTTP methods. In our example, the annotated method will process HTTP GET requests. The behavior of a resource is determined by the HTTP method the resource replies to.
• The @Produces annotation is used to specify the MIME media type that a resource can produce and send to the client. In our example, the type is text/plain.
• The @Consumes annotation is used to specify the MIME media type that a resource can consume, i.e. the type sent by the client.

In Eclipse, the creation of the file structure is done by following these steps:
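As an aside on the URI path templates mentioned above, the matching idea can be sketched in plain Java. The PathTemplate class below is a hypothetical simplification, not Jersey's actual routing code: it turns a template such as /hello/{name} into a regular expression and extracts the value bound to the first variable.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified sketch of URI path template matching; it ignores multiple
// variables per path, regex metacharacters in literal segments, etc.
public class PathTemplate {
    private final Pattern pattern;

    public PathTemplate(String template) {
        // turn each "{var}" segment into a capturing group
        this.pattern = Pattern.compile(template.replaceAll("\\{[^/}]+\\}", "([^/]+)"));
    }

    // Returns the value of the first template variable, or null if the
    // request path does not match the template.
    public String match(String path) {
        Matcher m = pattern.matcher(path);
        if (!m.matches()) {
            return null;
        }
        return m.groupCount() > 0 ? m.group(1) : "";
    }
}
```

With this sketch, new PathTemplate("/hello/{name}").match("/hello/Ana") yields "Ana", while a non-matching path yields null; in JAX-RS the equivalent binding is performed through the @PathParam annotation.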


The three-tier architecture using REST is illustrated in the following picture.

The project in which I created the resource is a Dynamic Web Project. The availability of the service is checked at the address http://localhost:8080/JerseyFirst/jaxrs/hello. This name is derived from the value of the <display-name> tag in web.xml, completed by the value of the <url-pattern> of the (automatically generated) <servlet-mapping> and the value of the @Path annotation.

The steps for generating a REST web service are:
1. Checking the following conditions:
• Jersey is added to the project
• The JAX-RS API is added to the project
2. The generation proper of the web service:
• The creation of the REST services
• The validation of the generated web service classes
• The validation of the configuration in the web.xml file

When testing a web service, we need to take the following things into consideration:
• The URL address correctly represents the endpoint of the deployed service and the annotations of the method
• GET, PUT, DELETE or POST requests invoke the appropriate methods of the service
• The methods return the expected data
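For reference, the web.xml that Eclipse generates for such a Dynamic Web Project typically looks like the sketch below. The servlet name, the package name and the /jaxrs/* pattern are assumptions chosen to match the URL discussed above; the exact content depends on the Eclipse and Jersey 1.x versions used.

```xml
<servlet>
  <servlet-name>JAX-RS Servlet</servlet-name>
  <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
  <init-param>
    <!-- package scanned for @Path resource classes -->
    <param-name>com.sun.jersey.config.property.packages</param-name>
    <param-value>com.example.ws</param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>JAX-RS Servlet</servlet-name>
  <url-pattern>/jaxrs/*</url-pattern>
</servlet-mapping>
```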

Creating a client

Jersey contains a REST client library which can be used for testing or for creating a Java client. The steps to be taken in order to develop a client of the REST web service are:
• Ensuring that the project has all the necessary libraries added
• Identifying the GUI window and the place where the results of invoking the web service will be displayed
• Gathering the information useful when developing the client: the URL of the service, the name of the package and the class where the client code will be generated
• Invoking the code in the GUI window

We will create a Client Project Application, with the following code:

    import java.net.URI;

    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.UriBuilder;

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.ClientResponse;
    import com.sun.jersey.api.client.WebResource;
    import com.sun.jersey.api.client.config.ClientConfig;
    import com.sun.jersey.api.client.config.DefaultClientConfig;

    public class Main {

        public static void main(String[] args) {
            ClientConfig config = new DefaultClientConfig();
            Client client = Client.create(config);
            WebResource service = client.resource(getBaseURI());

            // GET the status line of the response
            System.out.println(service.path("jaxrs").path("hello")
                    .accept(MediaType.TEXT_PLAIN).get(ClientResponse.class).toString());
            // GET the response body as a String
            System.out.println(service.path("jaxrs").path("hello")
                    .accept(MediaType.TEXT_PLAIN).get(String.class));
        }

        private static URI getBaseURI() {
            return UriBuilder.fromUri("http://localhost:8080/JerseyFirst/").build();
        }
    }

We will come back in future issues of the magazine with more complex applications, which will also include interaction with a database. We are looking forward to your questions!



programming

Computer skills to cheat

Over 10 years ago, a one-day security conference was organized at my university. I wanted to participate, but places were limited, so the organizers created a registration page which, they said, would open the next day at 12 o'clock sharp. I really wanted to attend, especially as they advertised a free T-shirt for the first 20 registrations. Being a pretty good developer at the time, I took a look at the site, found a vulnerability and managed to register myself before the registration opened officially. The next day, I show up at the conference entrance and say my name; the guy checks me off on the list. I take a quick peek and see myself at the top of the list; next to my name it says registration time 11:58. I smile :) He says, "ahhh... you're the one... how did you do that??" I ask: do I get a T-shirt? He says no, you get something better, and later he publicly awards me a book: Writing Secure Code, by Michael Howard and David LeBlanc. I started wondering why he was giving me the book. He needed that book more than I did! He needed to learn how to write secure code, not me!

Now, translate this incident into a more critical environment, such as an online company which allows customers to create accounts, deposit money and do things with that money, for example play games or place bets, and the T-shirt and the book are replaced with something else. Instead of a T-shirt, the attacker aims to get thousands of pounds, and instead of a book he gets years in jail.

Take this guy, for example: Alistair Peckover, 20 years old. In 2009 he was sentenced to 26 weeks in prison, suspended, after stealing 39K. In 2010 he was sentenced to 20 months in jail after he bought a Porsche and gold bullion worth 30K; this time he had changed his name. And again, in 2012, he got 3 years in jail after stealing 46K.
His judge said: "I believe that I will see you again in the future due to your gambling addiction and the temptation to use your computer skills to cheat, which will be hard to resist due to your character."

"Computer skills to cheat" caught my attention. It seems the judge believed this guy was some kind of computer genius or expert. He was living with his parents, no school, no job, with all the time in the world available to him; he was researching, 24 hours a day, all the gambling sites on the internet, to find games written by developers who hadn't read the above book. He was finding vulnerabilities in games and exploiting them for real money. After a lot of practice, he was an expert in his field, just as I was a pretty good developer after writing code full time for 2-3 years: I knew how a website works and knew how to manipulate it to do what I needed.

So, why do these bad things happen? Well, like everything else, for many reasons. To name a few: firstly, because people are greedy and want to steal money; secondly, because software is written by developers, who are human, and it is natural for humans to make mistakes. Only robots and computers don't make mistakes, except when they are faulty, or overheated, or programmed by humans, you know what I mean.

These security vulnerabilities are nothing more than coding mistakes. Developers didn't learn much about security vulnerabilities in school, and product owners are not very familiar with them either, so they ask the developers to implement just the happy flow, as quickly as possible. If they deliver the software before the deadline, they get a bonus! Getting to know these vulnerabilities is not difficult; anyone can do it, given the time. We, AppSec people, are fortunate to have found someone to pay us to spend that time researching these fascinating software vulnerabilities. I say fascinating because many people see them as something magical, something that only geniuses can see, but in fact all you need is time and focus.

We identify problems, we advise developers on how to fix them and we train developers to avoid them the next time. To accomplish this, we have implemented several steps; some people call it SSDL:
1. We work closely with the architects and contribute to the design of a new product, before it is implemented.
2. We try to stay close to the developers and have visibility into their development sprints, so that when they identify security-sensitive user stories, they can consult us on how to implement them correctly.


3. We also perform a security assessment of all code changes before they go into production. This consists of reviewing implemented user stories, reviewing source code and performing a penetration test on the new features implemented in the current sprint.

In the office in Romania, we have around 16 development teams and 2 application security guys. That is a lot of work. So, we asked for help. Who do you think we asked? The developers. And we said: you guys know your code best, you know how your application works, you know what every line of code does, because you wrote it. We have a deal for you: we will teach you what the security vulnerabilities are and how to avoid them from the beginning, when you write the code.

So, now we have a virtual team of security champions, made up of at least one developer from each scrum team, some testers and representatives from other teams such as DevOps and IT. We have regular internal security conferences with technical presentations and workshops; sometimes we bring in people from outside. Security champions are very effective, because this way we have at least one person in each development team who is thinking about the security implications. We teach him to be a hacker, to use the tools to test his own product and to write code that is more difficult to hack. It is a win-win situation for everybody: the individual, who enhances his ninja skills, the company, and the security team.

So, now we can go to the beach and relax, as the developers are doing a good job and hackers can't do bad stuff. The only problem is that, from time to time, we get a call from HR informing us that another developer has been promoted to management and they have hired 5 junior developers to replace him. And we have to take the first flight back and train them to write secure code.

References
1. http://www.crawleynews.co.uk/Broadfield-hacker-jailed-46-000-fraud/story17502872-detail/story.html
2. http://www.amazon.co.uk/Writing-Secure-Code-Best-Practices/dp/0735617228/ref=sr_1_1?ie=UTF8&qid=1421158841&sr=8-1&keywords=writing+secure+code

Cristian Șerban

Cristian.Serban@betfair.com Application Security @ Betfair



management

programming

Dealing with complexity through TDD and Agile

In the last couple of years, project complexity has slowly (and recently not so slowly) risen to a level where the previous ways of dealing with it no longer seem effective. In this first part, I will share some of the reasons why I think complexity is here to stay, and why I think it will continue to raise the bar on what acceptable software means.

Radu Ometita

radu.ometita@fortech.ro Software engineer @ Fortech


Complexity is primarily linked to Moore's law and to the incredible growth of computing power available today. This has allowed software systems to start tackling problems of increasing complexity with great ease, despite the programming paradigm having evolved at a significantly slower pace. Even with the advent of multicore processing, we have only recently started to feel the jump in software complexity when talking about concurrent programming. Since Moore's law has hit its plateau, and it looks like no one is willing to invest in giving single cores more processing power (if not even stripping them of the power they have), we are now faced with the dilemma of how to easily scale our applications. And it looks like there is no easy answer to this question, no answer that we can adopt while maintaining our current programming style and paradigms (mostly suited to single-core applications).

The increases in processing power of this past decade have also raised the bar on what the business expects applications to do. One of the most interesting changes we've seen is that software needs to be a lot more malleable and amenable to a fast pace of change. This is probably the main driving force behind Agile and the push for tighter feedback loops.

So, the reason for complexity is twofold. First, the business needs ever faster response times to requirement changes, an aspect illustrated by the fact that Agile methodologies have taken over pretty much all projects and software shops; I have not heard of anyone brave enough to attempt a waterfall methodology on a complex project these days. On the other hand, we have to deal with massively concurrent applications, which adds a lot to the complexity of the craft of software development; we can see this in the resurgence of functional programming paradigms in most (if not all) of the mainstream languages.

A framework for complexity

Understanding why old approaches seem to fail miserably when applied to very dynamic projects requires a better understanding of what complexity is and how it can be classified. One such model is the one employed by the Cynefin framework, which classifies complexity into 4 domains, each with its own ways of managing it. It is worth noticing, though, that what works for one complexity domain does not necessarily work for another. The idea behind these complexity domains is that complexity should be driven down from the complex to the complicated and maybe even the simple domain. But the way you turn a complex problem into a complicated one is different from how you change a complicated problem into a simple one. In the end, it is all about constraints.

(Figure source: quarterview.com/?p=1091)

Constraining a complex domain yields a complicated one and, if you constrain it further, it yields a simple domain. This can also happen the other way around (due to a chaotic event, or a change in specifications, direction, or all of the above), in which case you should watch for the signs that allow you to properly assess the complexity domain you are dealing with and treat it accordingly. What follows is a short description of the complexity domains as defined by the Cynefin framework.

Simple

Simple contexts are characterized by the fact that the correct answer to the problem is obvious. Some examples might be writing a Java Bean, or anything that can only be done in one way. This is the area of Best Practices, where the causality of an issue is clearly understood. At this level, the way to engage in solving a problem is by first assessing the situation, categorizing it and then responding to it in a preset manner. As far as programming goes, these are easy tasks that almost anyone who can follow a checklist can do. As a sample application, think of the type of code that anyone can write (even children or non-programmers).

Complicated

The complicated domain is a bit less restrictive than the simple domain and allows for alternative solutions to a problem. This is the domain of expert knowledge. Since you can have more than one good answer here, this is the domain of Good Practices. There is a causal link as well, which is, however, quite obscured by the multitude of possible solutions and definitely not as clear as in a simple domain. Here we first assess the situation; then (since it is not obvious which solution is best) domain experts must do an analysis; and then we respond by implementing the solution agreed upon. As far as programming tasks go, we can place here repetitive work that still requires some analysis of the situation. A nice example would be designing a CRUD application. This requires knowing a couple of frameworks, databases, etc., but there are a lot of established Good Practices that one can follow. At this level, the requirements are fairly stable and one can still get away with a waterfall approach. Another complicated task would be one that involves writing device drivers. They are quite complicated, but there are a lot of good practices that can guide you to a good solution, and the specifications will not change too much, since they are tied to the hardware interface.

Complex

The complex domain is what we have started to see a lot more of lately. At this level, what we get are fast-changing requirements as an answer to external pressure. Any change to the specification will cause a feature to be developed and, after implementing it, that feature may change the requirements again (after analyzing the impact it has on the user base or on other components of the application). This is the domain of emerging design, and we can only talk about causality in hindsight. At this point, the waterfall approach stops working due to very large feedback loops, and Agile methodologies start to get a lot of traction; everything is geared to support fast-changing requirements.

The biggest problem with complex contexts is that we are just now beginning to transition to them, and we tend to consider them as being just complicated. Good signs that this is happening are: trying to enforce a specification document, the emergence of a lot of rules and regulations trying to control the apparent chaos, and frustration caused by not understanding how the systems are expected to evolve and by not having a handle on things. Despite all these, complex systems require exploratory drilling, and you need to treat them as such. This means that you need to set up your project, application and process to allow for multiple cheap failures while attempting to get to the desired end result: launching an experiment, assessing its results and then, if we consider them satisfying, integrating the solution into the application. To achieve this, you need a couple of behaviours implemented:
• Encourage communication. Having the members of the team talk to each other is very useful here, to encourage the emerging behaviours by using the collective experience in problem solving as much as possible.
• Set barriers. You need to set up a framework of constraints within which the system can evolve. Knowing these boundaries makes it clear where we can move and innovate, and helps diminish the frustration caused by the fast pace of change.
• Reuse solutions. Try to find good solutions to common problems and reuse them across the system. Finding these good solutions provides more structure/constraints to the problem and hopefully helps demote it from the complex domain to the complicated one.

Chaotic

The chaotic domain comprises exceptional circumstances. This is a fairly rare occurrence and can be extremely dangerous for your application and business. For example, a huge security bug that was

www.todaysoftmag.com | no. 31/january, 2015



found in your running system can cause chaos in the system. Under such circumstances, the first thing you should do is control the damage. This means shutting down the servers, pulling the network cable, calling your lawyers: whatever you can do to minimize the impact of the incident. As a follow-up to a chaotic context (if the company survives it), process improvement and innovation can come without opposition, since everyone accepts change easily in order to prevent such events from happening again.

The evolution of the Agile practices

The early days

At the time of their inception, the Agile practices were born precisely to alleviate some of the problems that the complicated and, to a lesser extent, the complex domains raised. At that time, the preferred methodology would have been something akin to waterfall, which was beginning to fail due to the rise in complexity of the developed software (if it ever worked at all). The main problem with waterfall was the 'specification first' approach to software development and the fairly long release cycles. While these can surely work for both simple and complicated domains, the large feedback loop would prevent client involvement and lead to major rewrites of software from version to version. The waterfall method was (and still is) a very comfortable and intuitive approach that creates a false sense of security, since everyone goes about their jobs, performing admirably, according to specifications. However, when the deadline comes and the software is not exactly what the customer dreamed about, everyone has

an excuse. So, this is pretty much why this process is comfortable: you are not accountable for failure. No one is. As opposed to the waterfall methodology, the Agile approach assumes time-boxed incremental changes, which matches much better the problems in the complex domain and even in the complicated domain, where the tight feedback loop means a more accurate image of the client's expectations regarding the solution. This tight feedback loop and the idea of non-final, changeable requirements are the main reasons why Agile has flourished both in complex and in complicated domains. In the complex context, you cannot know the solution to the problem before solving it. All you can do is apply incremental changes, assess their outcomes and keep them or throw them away. Your solution will emerge from all these experiments at some point. This is basically the reason why Agile was such an overwhelming success initially. Early success with Agile has, however, bred complacency, and people thought the process was an easy one. For simple and complicated domains it obviously is, because in these contexts we can rely on slowly changing requirements and somewhat clear cause and effect connections. This complacency created a context which turned Agile into a set of good practices, and everyone relaxed because now they understood Agile. You just need to have stand-ups and the sprint retrospective, and you can forget everything about the forces that drove the creation of Agile. Incidentally, the same happened with TDD. Due to the simple or complicated nature of the projects, TDD was reduced to the requirement of having a set of unit tests for your code to give an acceptable


degree of assurance as to the correctness of the code. And it got really easy. You just need to have (close to) 100% coverage. No need to think too much about the way you structure the production or testing code. And this felt professional and all was good with the world. But then change happened.

We are not in Kansas anymore

You can see this change happening because a lot of people have started saying that Agile and TDD and all those nice little practices that made us feel very professional are failing badly. So what is happening? Software started to need to solve complex problems, and for the first time we had the hardware that could do it. The downside is that we lacked a clear understanding of the complexity model behind our requirements and assumed that we could do more of the stuff that we did for complicated problems and it would just work. But… it could never have worked. Complex problems are very different in nature from complicated problems due to the change in the causality chain. We can only see the causality in complex problems in hindsight. While this seems clear now, it did not seem so obvious back then. It was like trying to touch the rainbow. You fail one project, you learn your lesson, establish a new good practice that would prevent the same cause and effect chain from happening, and then you try again. Unfortunately, the nature of complex problems does not in the least guarantee that such an approach would work, causing endless frustration. Amusingly though, the Agile and TDD practices could have been used to help solve complex problems in their original, less institutionalized form, if the forces that make them work had been clearly understood. The current domesticated Agile and TDD rule books started to fail badly. And there is no question as to why.

So why did TDD fail?

Let's just analyze what the main beef people seem to have with it. If you have full coverage using unit tests of your


software, then changing software behaviour will cause tests to fail. How many would fail? It depends on the change and on the battery of tests you have written. Note that in simple and complicated domains unit tests are good, since those domains are almost immune to change, and solidifying your code base IS a good idea (though these tests should be viewed as more of a deterrent of change, rather than an enforcer of correctness). When the software is trying to solve a complex problem, especially one whose solution is not really clear at this point, that code needs to change a lot. It needs to make failure and experimentation cheap. Why would anyone use unit tests here? The only sensible answer is: because of best practices. Which obviously don't apply to complex domains. TDD was not initially about unit testing. The emphasis, especially for newly born software, was first on functional testing (which in my book is behavioural testing). These tests are magic in the sense that they do not test units of code but rather behaviours across the units of code. I wonder how this would work for complex systems.
• Setting up functional tests provides precisely the barriers that you need when developing software in the complex domain. Tests are sometimes called executable specifications.
• Functional tests do not impede changes to the code base; rather, they encourage them. They make sure that your code respects the functionality that was agreed upon so far (ease of software change was one of the goals of TDD, which now seems to have been long forgotten).
• The test-first approach of TDD provides an initial structure (barrier) for the software feature that you are trying to implement. The emergent nature of the Red / Green / Refactor cycles fits like a glove the "emerging practices" of solving complex problems.
• The Refactor part of the TDD cycle is what usually gets postponed in most teams, hence we have 'consolidation sprints'. This is a mistake.
Your code should always be top shape and properly abstracted in complex scenarios. How can you make cheap experiments if your code is like a bowl of spaghetti?
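To make the "executable specifications" idea concrete, here is a minimal behavioural test, hand-rolled so it stays self-contained. All names (CheckoutSpec, Cart, PriceCalculator) and the 10%-off-over-100 pricing rule are invented for illustration; the point is that the assertions pin down a behaviour that spans two units, so either unit's internals can be refactored freely without breaking the test.

```java
// Hypothetical sketch: a behavioural ("executable specification") test that
// crosses two units instead of testing each unit in isolation.
import java.util.ArrayList;
import java.util.List;

public class CheckoutSpec {
    // Unit 1: a cart that collects item prices (in cents).
    static class Cart {
        final List<Integer> prices = new ArrayList<>();
        void add(int priceInCents) { prices.add(priceInCents); }
    }

    // Unit 2: a pricing rule: 10% off orders of 100.00 or more.
    static class PriceCalculator {
        int total(Cart cart) {
            int sum = cart.prices.stream().mapToInt(Integer::intValue).sum();
            return sum >= 10_000 ? sum - sum / 10 : sum;
        }
    }

    // The behaviour under test, expressed end to end across both units.
    static int checkoutTotal(int... prices) {
        Cart cart = new Cart();
        for (int p : prices) cart.add(p);
        return new PriceCalculator().total(cart);
    }

    public static void main(String[] args) {
        // Specification: small orders pay full price...
        if (checkoutTotal(2_000, 3_000) != 5_000) throw new AssertionError("no discount expected");
        // ...and orders of 100.00 or more get 10% off.
        if (checkoutTotal(6_000, 6_000) != 10_800) throw new AssertionError("10% discount expected");
        System.out.println("behaviour preserved");
    }
}
```

If PriceCalculator were later rewritten (say, as a chain of pricing rules), this specification would keep passing as long as the agreed behaviour is preserved, which is exactly the property that makes cheap refactoring and experimentation possible.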

It surely does look like TDD would be a perfect fit for our complex domain. But you cannot treat TDD like a rule book, use it as such and expect results in this domain. You need to practice TDD while being aware of the forces that led to its creation and of what your testing strategy wants to achieve. You should always have a testing strategy, too. Furthermore, it is important to start loving the Red / Green and especially the Refactor phases. This is the only way code will become easier to compose. It can never solve all the problems: a major architectural change will still take time to implement. But with consistent refactoring, if there is a pattern to the changes, or when that pattern emerges, your code will precisely mirror it, and doing cheap exploratory coding will get a lot easier.

What about Agile practices?

From the very beginning, Agile practices were based on tested and testable code. If you have an agile process and untested code, then you are probably not very agile (see flaccid scrum), the project is rather new, or it is in the simple (or maybe complicated) domain. One of the recent complaints about agile practices sounded like 'programmers should write code, not waste their time in senseless standup meetings'. This assumes that you apply the process to a simple or complicated problem, since writing code assumes that you know exactly what that code should do. This is not the case in the complex domain, which requires cheap failure and has the specification changing according to the success or failure of various experiments. A complex domain also requires you to structure your code in reusable pieces. How can you know that you are not writing the same kind of functionality as one of your team mates, if you don't talk to each other? In complex domains, communication should be strongly encouraged; getting the team together daily to communicate during a 20 minute standup, rather than having them interrupt each other at random intervals throughout the day, is simply an optimization. If standup meetings are held just because your practice requires it, with total disregard for the problem they try to solve, then yes, they are wasted time. Another fairly good point about why Agile fails complex domains is the sprint planning session, when you need to estimate the time that it takes to implement a feature. Since the nature of complex domains is experimenting with code, this seems to be a bit counter-intuitive (you can't accurately make predictions about features in this context). However, I think this is really not the point in practice. I rather believe sprint planning should give a somewhat accurate idea of where we are and where we plan to go next, and it does achieve that wonderfully.

Conclusions

We can no longer afford to believe in the magic of sacred words like TDD or Agile. As the complexity of software grows, the whole software development team has to understand the forces that gave birth to TDD and Agile and how to apply them accordingly and with constant scrutiny. We have a model for complexity that explains why this is true and why we won't be able to replicate our past successes in the current world using the same tricks. Is this model accurate or not? We do not know very well yet, but the way it has been received and used shows some interesting correlations, at the very least. Complacency in your success creates all kinds of problems for you and for the industry as a whole. Don't get too comfortable in your world, and always try to learn something completely new, preferably something scary. Right now, complacency is about to get kicked out the door. If you want to join the new wave of programmers and programming paradigms, start learning now.


management

Boosting Agile in distributed teams

Tiberiu Cifor
tiberiu.cifor@3pillarglobal.com
Engineering Manager @ 3Pillar Global

As we all know, one of the most widely used ways of managing project teams nowadays is Agile. Agile can be successfully implemented using Scrum (in my opinion, one of the most widespread approaches), Kanban or others. Everybody does Agile, everybody knows the Agile principles and everybody implements them. Due to the nature of my job, I have been through many projects, from the smallest to the biggest, from the easiest to some of the most difficult. Over time, I have read a lot about Scrum, how it is implemented, its principles, its best practices and so on. In all of this, I have found very little information on how Scrum can be optimized when working in distributed teams, located in different areas of the world, working on the same project and trying to implement the principles which have turned Scrum into one of the most popular methodologies. If we are talking about success, compared to previous years we can notice an upward trend in the success ratio of projects using Agile, though not only in their case. Below are the statistics for the last 2 years.

Projects using Agile (2012 vs. 2013):

Projects using V-model (2012 vs. 2013):

Projects using Waterfall (2012 vs. 2013):


Source: http://www.planittesting.co.nz/resource/industry-stats-project-outcomes-based-on-primary-methodologies-2013/

Distributed teams – advantages and disadvantages

Most of the project teams I have worked in were distributed. I've encountered a situation in which the client was located in the USA and the entire development team was located in Romania, but I have also seen more complex cases, where the development team was located both in Romania and in the USA, alongside the client. I think this is one of the most difficult situations to manage, and I will explain why later. Let's move on now and discuss a bit the advantages and disadvantages of a distributed team. Let's take an objective look at the advantages offered by a distributed team. First of all, the different time zones must not be seen as an impediment (which usually happens), but rather as an advantage. Why? For the simple reason that when one part of the team is not available, the other part is working, and it can solve tasks, possible problems emerging in the production system and so on. Looking at this situation as a whole, we can notice that the time zone difference is indeed an advantage, as it ensures the presence of support over a great part of the day. If the team is working on a project that is already in production, this advantage may become even more important.


Educating the client to trust the team

Usually, on most projects, people work with Scrum or, more recently, with Kanban. So far, everything is ok, but have you ever asked yourselves how well the client knows the Scrum methodology? It is a very important aspect, with a major impact on the future development of the project and rather ample consequences. Firstly, some of the clients we work with do not know very well what Scrum means, what its basic principles are and how exactly it is implemented. Moreover, if we are working with a big enough organization, we will notice that there are other roles and responsibilities. Surely, you have come across clients who have project managers, or the so-called Line Managers. The question is: how do these roles fit into Scrum? It is very important, when a new team is created and starts working on a new project, to define all the roles for all the individuals involved in the project, right from the beginning. Furthermore, my recommendation is to also draw up a short list of the responsibilities for each of these roles. These responsibilities will shed a little light and will settle some connections within the team, so that each of the people involved in the project knows what they have to do. Good, we have established that it is very important to define the roles and responsibilities on the project, but what can we do when we come across reluctant clients, who are extremely difficult to persuade regarding these aspects? The answer to this question can be reduced to a single thing: TRUST. At this significant point, when the team is created, it is very important for the project manager, the one in charge of the entire development team, to begin playing his part. We were talking about roles, right? Well, what is the main role of a project manager in this phase? Building up the team, adopting a common procedure for everybody and, most of all, developing their trust. Nobody tells us this: it is very difficult to succeed in building up people's trust in a short period of time, but it is not impossible. My recommendation in order to facilitate this is to have a face-to-face meeting with the client, to discuss all these aspects.

Roles and responsibilities

In time, I have encountered different forms in which a team would function, and I am referring here strictly to the roles and especially the responsibilities of each member of the team. Do not forget, we are talking here about distributed teams, and within these teams you will be surprised to see that each member has his own way of understanding the role he is playing within the team and especially his responsibilities. It is very important that in a distributed team these roles and responsibilities be clearly established right from the beginning. We know very well who the main players in a team that adopts Scrum are. But is this situation always the ideal one? I will tell you: in most cases, I have seen people involved in the development of a project who simply cannot identify with any of those actors we are used to in Scrum. Let's take a practical example: a client comes and tells you he has a Business Analyst available, who will want to work with the team in all the steps of the product development. Instinctively, we, as managers, could come forth and try to analyze whether this BA position fits anywhere, or whether we can find solutions to optimize the Scrum. Moreover, I have seen cases when certain conflicts would arise because things are not like that; we have no BA in Scrum. Let's see how this situation could be solved as efficiently as possible for everyone. I see things in a rather simple manner in this case, and usually my suggestions go towards turning that BA into a Product Owner, and if we already have a Product Owner, why not have the two work together? Let's come back a little bit to the roles and responsibilities. Most of the time, in distributed teams, and especially when there are individuals involved in the entire development process both from the client's side and from the developer's, certain conflicts may appear. Why do I say this? Well, it is somewhat natural for some clients to try and place their people in some key positions, people who will want to be in charge of the development process and so on. Of course, conflicts emerge once again. It is not a bad thing for clients to wish to participate effectively in the development of their product, but we should try to approach these issues directly and make them understand that, for instance, as their partners, we may have more experience in developing a product, and we should try to bring arguments as to why it would be better for them to let us coordinate some activities. Some clients, due to the fact that they are the clients, think they have all the right to lead the entire development process. Well, in order to avoid these unpleasant situations, it is very important to discuss clearly what the roles of all the people involved in the process are and, especially, it is very important to


establish a minimum set of responsibilities for each of these roles. Who is responsible for adding new tasks to the backlog? Who should facilitate the communication among the team members? Who is responsible for leading the technical discussions? And so on. Once these roles and responsibilities have been established, chances are that things will go very well within the team and, thus, you will have a happy client. This is what we all want, isn't it?

Efficient communication

Well, we are getting to the communication part. Many of you may sometimes ask yourselves how we could communicate more efficiently within the team and beyond. Communication plays one of the most important parts in a distributed team. If, in a team that is located in the same place, osmotic communication comes naturally, in the case of a distributed team things are no longer that easy, and we should focus on how we can overcome the barriers which will inevitably emerge during the development of a product. Some might say: well, ok, it is very good to communicate as often and as much as possible with the members of the team, since we can do no harm, right? Well, that's totally wrong. Don't forget, we are talking here about being efficient. If the team members begin to communicate excessively, using every opportunity to communicate anything related to the project, we are no longer efficient. Efficient communication involves a direct approach through which we try to solve any encountered situation as quickly as possible. This is efficient communication. I am sure you have encountered situations when you felt, or heard from others, that "we are wasting too much time on conferences and meetings with the client". When you hear something like that, you should know that, most probably, there is a communication problem. You will now say: well, we are spending time in meetings with some of the clients because certain requirements are not clear. You are right, it is quite a usual situation, but this doesn't mean we have to waste the time of the entire team in long, endless meetings. Why not select 1-2 members of the team to take turns participating in such meetings? Are we a little more efficient? I would say yes. Or why not have, in every sprint, a person responsible for such a task, that of clarifying some of the requirements?
We could go on with other situations, but what I am trying to stress here is the fact that we can be efficient just by making small adjustments. In addition, before a major release or an important deadline, try to gather all the team in one place. What can be more efficient than having the entire team in the same place so that they can

communicate face to face? It is often quite difficult to do this, especially when we are talking about big teams. And we know very well that there can also be certain budget constraints. But, where this is possible, I strongly recommend that you try to gather the team in one place. If it is not possible to gather the entire team in a single location, try to turn your webcams on in all the online conferences you have with the rest of the members. Sometimes it helps a lot to see your interlocutor, to observe the body language and facial expressions of the person you are talking to. By doing this, the discussions will become friendlier and more open, and you will surely notice that they become even more efficient. I would like to make one more suggestion: encourage everybody in the team to directly approach a person who can help. I have seen situations, even with some managers who, out of the desire to have everything under control, demanded that the people in their team communicate their problems to them, and they would solve the problems with the client or with other members of the team. Facilitate direct communication: if you have a problem to solve and you know for sure that somebody on the client's side, or someone from another location, can help you out, then go directly to them. Moreover, encourage people to use direct communication whenever possible, namely telephone, Skype or any other direct means of communication. E-mail should come as the last resort. You will be surprised by the results that can show up when communication is carried out using this approach.

Example of a burn down chart:

Example of a burn up chart:

Source: http://www.agilemodeling.com/essays/communication.htm
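The data series behind such a burn down chart is simple to produce. In this sketch the class and method names and the numbers are invented for illustration; tools like JIRA generate the same kind of series automatically.

```java
// Hypothetical sketch of the data series behind a burn down chart:
// remaining story points at the end of each day of a sprint.
public class Burndown {
    // committed: total points at sprint start; completedPerDay: points finished each day
    static int[] remainingPerDay(int committed, int[] completedPerDay) {
        int[] remaining = new int[completedPerDay.length];
        int left = committed;
        for (int day = 0; day < completedPerDay.length; day++) {
            left -= completedPerDay[day];
            remaining[day] = left;
        }
        return remaining;
    }

    public static void main(String[] args) {
        // A 5-day sprint with 20 committed points.
        int[] remaining = remainingPerDay(20, new int[]{3, 5, 0, 6, 4});
        // Anyone glancing at this series can answer "are we on track?"
        for (int r : remaining) System.out.print(r + " "); // prints "17 12 12 6 2 "
    }
}
```

Plotting this series against the ideal straight line from committed points down to zero gives the burn down chart; summing completed work instead of subtracting it gives the burn up variant.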


I would also like to say a few words on “active listening”, “information radiators” and negotiation. All these things lead to


an efficient communication, and if we understand these concepts a little, we will be able to deal more easily with some tense situations or, at least, we will be able to shed more light on a status required by a manager or a client. Information radiators normally represent any screen or chart that is visible at all times where the team works. Normally, if the team were in a single place, an information radiator could be a board in the office, for everyone to see. What does this board look like when we are working in distributed teams? Well, it is very simple: this board is transposed into the online environment and takes the form of several types of reports which are generated automatically by the instrument used by the team to measure progress, for example JIRA. In this case, the information radiators can be: a burn down chart, a burn up chart, a cumulative flow diagram, a velocity chart. These types of graphics help the entire team, no matter where it is located, to see at any moment what the progress of a sprint is, for example. Furthermore, you know too well the questions of the type: "how are we doing with this sprint?" or "are we on track with the sprint?". Well, the answer to these questions can be seen by anyone who looks at one of these reports, which are very nicely represented graphically. I mentioned "active listening" somewhere above. In distributed teams, this concept plays a very important role. Normally, when we are listening to somebody, we are doing it in a passive way. In the Agile methodology, this active way of listening refers to the fact that everybody is actively involved in a discussion, meaning that everybody has to ask questions and give answers. Do not picture a chaotic thing, where everybody is talking. No! Actually, this active manner of listening and being involved in a discussion has to follow certain rules.
We are talking here about 3 levels when it comes to Active listening: • Internal Listening – the participants just pay attention and usually ask themselves whether the discussed matters affect them or not. • Focused Listening – the participants show they are interested in the discussion and try to place themselves in the shoes of the speaker. • Global Listening – the participants show involvement, ask questions, are critical about some of the arguments, are full of respect and usually show all these things through gestures and emotions.

All of the above mentioned things, through communication, eventually lead to negotiation. Negotiation is extremely important in Agile, as you well know. I think I ought to mention it here for the time being, although, in my opinion, negotiation deserves a special, dedicated article. There are many interesting things about negotiation, but the most important aspect I would like to mention is that negotiation has to be based on respect and understanding of the interlocutor, and it is necessary to bring valid arguments that can draw the attention of the person you are talking to on a given topic. There is one more thing I would like to add here, namely: as managers, you should try to integrate into the team, and by that I mean try not to be seen as bosses. Most of the time, that would only place more barriers in the way of an open and, most of all, efficient communication.

Conclusion

This is a very wide topic and maybe it requires a lot of debate. What I have tried to do was to put down some of the real problems that many of you perhaps deal with when working in distributed teams. We should take into consideration, at all times, several aspects in order to facilitate this manner of interaction and especially to optimize some of the already known procedures which do not address this very important aspect: working in distributed teams. The past few years have shown us that more and more companies are opting for teams made of members from different corners of the world, for different reasons. This represents a challenge both for the managers and for the actual team. The way in which people manage to interact, the way in which they succeed in overcoming certain obstacles and especially the value they manage to bring to a development team turn the entire team into a work model. Projects evolve, methodologies evolve, the world around us is in continuous movement, and this can also be felt in the mechanisms that turn a simple team into a successful one.



programming

5 Tips for Useful Scrum Code Reviews

Every week, we, the product development team at Mozaic Works, discover 2-3 potential bugs during our code review sessions in the product we are developing. This happens despite a very structured way of working and despite applying ATDD and Test First / TDD.

Alexandru Bolboacă

alex.bolboaca@mozaicworks.com Agile Coach and Trainer, with a focus on technical practices @Mozaic Works


Yet developers and technical leads complain to us in the community or during coaching sessions and workshops about various aspects of code reviews. Here are some tips to make your Scrum Code Reviews more useful.

Tip #1: Discuss Cosmetic Guidelines Exactly Once

There are two main types of guidelines:
• Cosmetic: indentation, spaces vs. tabs, naming policies, curly braces positioning etc.
• Technical: how to write code to avoid common mistakes

Programmers will debate the cosmetic guidelines for hours and hours. The proof is on Google, but we wouldn't search there if we were you and had a lot of work to do. Cosmetic guidelines should be discussed exactly once and encoded in IDE settings. We can live with writing curly braces on the line after the function name; that's not a debate worth having. However, code consistency is important, and any Scrum team should follow common cosmetic guidelines.

Tip #2: Start with Minimal Guidelines Derived from Architecture

Large guidelines documents are prevalent when working as a junior developer. Typically, the technical leaders would copy the Microsoft guidelines or point the developers to them. While this helps with learning how to write better code, it is impossible to remember all of them. There's a simpler way. Good architecture results in a thorough risk analysis, code samples and sets of guidelines to follow. These guidelines can relate to security (e.g. "always encrypt data stored by the patient file module"), testing (e.g. "use dependency injection to allow testability"), performance (e.g. "use a profiler to measure the execution time for the queries in the reporting module") etc. These are the first guidelines; they are specific, contextual and help avoid common mistakes. And this takes us to…

Tip #3: Improve the Guidelines based on Your Past Mistakes

We remember the days when the team leader would give us, on our first day on a project, a 20-page guidelines document. Reading it took a lot of time, and we quickly forgot its contents. When I became a technical lead, I decided to improve the approach. Here's how. Technical guidelines are typically based on "best practices"; the trouble with best practices is that not all best practices apply to all products. I favour the name "rules that help avoid common mistakes". How do you know which the common mistakes are? Easy: you need to make them first. Some of you will probably frown at this thought, but if you're honest with yourself you'll realize that you make a lot of mistakes. I make a lot of mistakes, but my colleagues keep me honest. How about taking advantage of the mistakes you make? Here's how this virtuous cycle works: Write coding guidelines -> Perform code reviews -> Analyze mistakes -> Improve guidelines
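The testing guideline quoted above, "use dependency injection to allow testability", can be sketched in a few lines. ReportService and its Clock interface are hypothetical names invented for this example, not part of the article.

```java
// Hypothetical sketch of the "use dependency injection to allow testability"
// guideline: the time source is injected, so tests can control it.
public class ReportService {
    public interface Clock { long nowMillis(); }

    private final Clock clock;

    public ReportService(Clock clock) { this.clock = clock; } // injected, not hard-coded

    public String stamp(String title) {
        return title + " @ " + clock.nowMillis();
    }

    public static void main(String[] args) {
        // Production wiring uses the real clock...
        ReportService live = new ReportService(System::currentTimeMillis);
        System.out.println(live.stamp("weekly report"));

        // ...while a test injects a fixed clock and gets a deterministic result.
        ReportService test = new ReportService(() -> 42L);
        System.out.println(test.stamp("weekly report")); // prints "weekly report @ 42"
    }
}
```

Because the collaborator is passed in rather than created internally, a reviewer can verify the guideline at a glance, and a test can substitute a deterministic fake.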
I make a lot of mistakes, lent when working as a junior developer. but my colleagues keep me honest. How Typically, the technical leaders would copy about taking advantage of the mistakes the Microsoft guidelines or point the deve- you make? lopers to them. While this helps learning Here’s how this virtuous cycle works: how to write better code, it is impossible Write coding guidelines -> Perform code to remember all of them. reviews -> Analyze mistakes -> Improve There’s a simpler way. Good architec- guidelines ture results into a thorough risk analysis,


Tip #4: Use Different Types of Code Reviews

There are three prevalent types of code reviews in the teams I worked with: over-the-shoulder, using a tool and pair programming. Each of them has advantages and disadvantages, and that's why it's worth combining them. The purpose of code reviews is to avoid mistakes (or, as we typically call them in the industry, bugs). A side effect is learning by reading and discussing someone else's code.

The first difficulty with Scrum code reviews is to build the discipline. If you're just starting with Scrum code reviews, you need to set up triggers. For example: once a week, or when a story is done. Once you get into the habit of performing Scrum code reviews, the strategies can vary. We have a complicated schedule, and we have to vary the types of reviews:
• Over the shoulder – I sometimes go near my colleague Claudia and look at the code she's writing. She does the same; in fact I'm never shy to ask for her help when working on a task.
• Scheduled – every week, we have a one hour code review session
• Pair programming – when there's a more complicated task or we decide to try a new way of doing things
• Random – about once a month, I take a look at random parts of the code to see if there are better ways to write them

We don't use a tool for code reviews. We look at code, discuss, decide on improvements to make and apply them as soon as possible. How does this scale? We propose the following strategy for Scrum code reviews:
• Pair program on complex user stories. If the story was completely developed using pair programming, consider it already reviewed.
• When a story is done, review it with a colleague. It's not necessary that the colleague is more experienced. Review against the guidelines and against readability, simplicity, changeability, and security. Write down the problems you found and give them to the Scrum Master.
• Once per sprint, schedule a 30'

session with the team where you collectively review some of the code written during the sprint. Give the results of the review to the Scrum Master.
• At least once every 4 sprints, have a skilled developer take a look at random parts of the code for 30-60'. The conclusions should go (no surprise here) to the Scrum Master.
• Discuss the findings at the retrospective. The Scrum Master should decide on a schedule, based on the number of identified issues. The retrospective should result in a guidelines update.

This strategy takes advantage of the brain power of all the developers from the team, not only of the technical leads.

Summary

To have more useful Scrum code reviews, follow these rules:
• Discuss cosmetic guidelines exactly once, and encode them in the IDE settings
• Start with a minimal coding guidelines document that results from architecture risks (e.g. security)
• Perform code reviews and write down the problems you find
• Analyze the problems at retrospectives and improve guidelines based on past mistakes
• Use different strategies for code reviews: over the shoulder, pair programming, scheduled, random, through a tool
• Trust your colleagues

Tip #5: Trust Your Colleagues

People who were technical leads before the Scrum transition typically turn into gatekeepers afterwards. A typical issue is that they want to do all the Scrum code reviews to make sure the quality stays at a high level. While their purpose is good, the implementation is not helping the team. A technical lead who insists on validating the code will become a bottleneck very soon. The developers from the team will have little motivation to take responsibility for the code they're writing if they know someone guards it. Also, the technical lead risks getting separated from the team, because he's obviously not equal with the others.

Turn this the other way around and you get a functional, productive team that's learning together. A former technical lead is coaching and helping everyone else grow by pair programming with them, helping with difficult tasks or delivering small training sessions. Developers take responsibility for their code because they review each other's code. Everyone has equal roles, but everyone contributes to the team with the best they have to offer: junior developers with their skills and time, senior developers with smart solutions and technical leads with growing everyone else from the team. Trusting your colleagues will get your team to a more effective work style.

www.todaysoftmag.com | no. 31/january, 2015




programming

Converging Documentation in a multi-module software project: a Build Automation based approach

Software specification documents serve as reference manuals for user interface designers, programmers who write the code and testers who verify that the software works as intended. In a multi-module application, each component is developed and released individually. Keeping the documents up to date for each component is not easy, because it is not only about writing, but also about centralizing all the documents, so they can be easily found by the interested people.

Alexandru Albu

alexandru.albu@isdc.eu Senior Developer @ ISDC

Context

The goal of this article is to present an approach that simplifies this process, relying on Build Automation1 to extract and publish the documents. We will take you through the configuration process of the Apache Maven2 artefacts and the Jenkins CI3 server and, eventually, the creation of a generated project website that reflects the current status of the project, offering access to all available documents from a single place. Our case study is a framework written in the Java programming language, consisting of several Maven modules. Each module contains one or more documents written in Markdown4. The documents are written by developers and they are readmes5, howtos, technical documentations6 and others.

What do we want to achieve?

Before stepping into the process, we should first imagine what the end result will look like. We want a front html page that presents the latest documents written by the developers of the released modules. It is a common practice to release JavaDoc, release notes and other documents together with the software product. They actually accompany the software product, so the product can be used and understood by the consumers.

By releasing the artifacts with Maven, the compiled files end up in a structured Repository. You can release not only the compiled artifacts, but also the JavaDoc, the Cross-Reference of the source code, or a whole site presenting information about the project dependencies, issue tracking, continuous integration, team and many others. The project site is the most important element in achieving our goal, because our aggregating website will link the websites of each individual module into a single html page. Imagine the html page containing the following:

<div>
  <h1>module1</h1>
  <a href="modules/module1/index.html">About</a>
  <a href="modules/module1/Readme.html">Readme</a>
  <a href="modules/module1/ReleaseNotes.html">Release Notes</a>
</div>
<div>
  <h1>module2</h1>
  <a href="modules/module2/index.html">About</a>
  <a href="modules/module2/Readme.html">Readme</a>
  <a href="modules/module2/ReleaseNotes.html">Release Notes</a>
</div>

1 http://en.wikipedia.org/wiki/Build_automation
2 http://maven.apache.org/
3 http://jenkins-ci.org/
4 http://en.wikipedia.org/wiki/Markdown
5 http://en.wikipedia.org/wiki/README
6 http://en.wikipedia.org/wiki/Technical_documentation



This requires the presence of the referenced files on the disk, generated during the process:

• index.html (our page)
• [modules]
  • [module1]
    • index.html
    • Readme.html
    • ReleaseNotes.html
  • [module2]
    • index.html
    • Readme.html
    • ReleaseNotes.html

Approach

Our modules are Maven artifacts. The approach relies on Maven Build Lifecycle7.

Generate the module site

We generate the site by using the maven-site-plugin8. With this plugin alone, we can only generate the default bits, such as Project Summary, Project Plugins, Dependencies, and others. With some configuration and help, we can intervene in the plugin's normal flow and include our Markdown files as html documents referenced from the generated site. In order to transform the Markdown files into html, we need doxia-module-markdown as a dependency for this plugin. With this in place, the site generation process looks inside src/site/markdown and converts each .md file into html.

This might sound easy, but if our *.md files have images, for example, they are simply ignored by the plugin. The site plugin only copies the contents of src/site/resources. But we want our Markdown files to be accessible also offline, so the developers can always have a look. Referencing the images from src/site/resources will only work offline, because after the site is generated, the resources folder will not be present, so we will end up with broken links. What we really want is to have our Markdown files in src/site/resources, referring to images from src/site/resources/images, so simply from images (as a relative directory), because after site generation, the contents of images is merged with the other images that the site generates in the target/site/images directory.

In conclusion, the module's src directory has the following structure:

• [src]
  • [main] – source code
  • [site]
    • site.xml
    • [resources]
      • Readme.md
      • ReleaseNotes.md
      • [images]
        • image1.jpg
        • image2.jpg

Inside the Readme.md we find a reference to images:

![Alternative text](images/image1.jpg "Descriptive text")

After the site is generated, we expect to find our files in the target directory as follows:

• [target]
  • [site]
    • Readme.html
    • ReleaseNotes.html

7 http://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
8 http://maven.apache.org/plugins/maven-site-plugin/

    • [images]
      • image1.jpg
      • image2.jpg

The 2 html files should be referenced in the generated site html files. In order to make this happen, we need a way to tell Maven the following:
1. Before you generate the site, please copy all *.md files from src/site/resources into src/site/markdown. Be careful, the new directory does not exist.
2. Generate the site for the project, but in the end result I would like to have references to my Markdown based documents. I know I have a Readme.md and a ReleaseNotes.md, so you should link to Readme.html and ReleaseNotes.html.
3. When you finish the site, delete the src/site/markdown directory. I want to keep my source code clean, without duplicates.

We start with the second point, by describing in src/site/site.xml the links to our future html files:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/DECORATION/1.4.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/DECORATION/1.4.0 http://maven.apache.org/xsd/decoration-1.4.0.xsd">
  <!-- skin, banners and other site configs -->
  <body>
    <!-- menu entry for developer documents -->
    <menu name="Developer documents">
      <item name="Readme" href="Readme.html"/>
      <item name="ReleaseNotes" href="ReleaseNotes.html"/>
    </menu>
    <!-- menu entry for javadoc, jxr, and others -->
    <menu ref="reports"/>
  </body>
</project>

The plugin reads the instructions from the above file and generates the site accordingly. We added a new menu entry containing our documents.

The first and last points involve file manipulation, and this is the perfect job for an Ant task. In Maven, we can use the maven-antrun-plugin, and we configure it with 2 executions:
• the first execution creates the src/site/markdown directory and copies all the *.md files into it. This must be done before the site starts to generate, so in the pre-site phase;
• the second execution removes the src/site/markdown directory, and it must be done after the site is generated, so after the site phase.

The resulting pom.xml is presented below:

<project .. >
  <dependencies>
    ...
  </dependencies>
  <build>
    <plugins>
      <!-- prepares Markdown files for the Maven site -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <id>pre-markdown</id>
            <phase>pre-site</phase>
            <configuration>
              <tasks>
                <delete dir="${project.basedir}/src/site/markdown" />
                <mkdir dir="${project.basedir}/src/site/markdown" />
                <copy todir="${project.basedir}/src/site/markdown">
                  <fileset dir="${project.basedir}/src/site/resources"
                           includes="**/*.md" />
                </copy>
              </tasks>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
          <execution>
            <id>post-markdown</id>
            <phase>site</phase>
            <configuration>
              <tasks>
                <delete dir="${project.basedir}/src/site/markdown" />
              </tasks>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <!-- the site generator is related to the reporting section -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-site-plugin</artifactId>
        <version>3.3</version>
        <dependencies>
          <dependency>
            <groupId>org.apache.maven.doxia</groupId>
            <artifactId>doxia-module-markdown</artifactId>
            <version>1.5</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <outputDirectory>${project.build.directory}/site</outputDirectory>
    <!-- configure other reporting plugins, such as maven-javadoc-plugin, maven-jxr-plugin -->
  </reporting>
</project>

By calling mvn site, Maven generates the desired site into the target/site directory.

Deploy the module site

The next phase in achieving our goal is to wrap the generated site into a jar file and upload it into a Maven Repository. The site plugin knows how to create a jar file from the files inside the target/site directory. All we need to do is call mvn site:jar, with one remark: the pre-site phase gets executed only if we invoke mvn site, without the :jar goal. To make sure the Markdown files are taken into consideration even when the target is empty or not present, we should call mvn site site:jar. The result is a new jar file, target/module1-site.jar. All we need to do now to consider this step complete is to upload this jar file into the Maven Repository. It can be done by using the Maven Deploy Plugin9.

9 http://maven.apache.org/plugins/maven-deploy-plugin/

The resources project

The purpose of this project is to aggregate all the available resources into a single website. Besides the modules' documentation, it can also hold general documents, like first steps for developers or project technical overviews. For those, the maven-site-plugin can be applied in the same manner as for the modules. In order to download the generated sites we use the Maven Dependency Plugin. It helps us retrieve the artifacts and the *-site.jar deployed at the previous step. Our goal is to expand all the site archives into target/site/modules, so we can maintain the desired website structure. To get the *-site archives for the modules, all must be declared as dependencies of the resources project in pom.xml:

<project .. >
  <dependencies>
    <!-- module 1 -->
    <!-- module 2 -->
    <!-- ... -->
  </dependencies>
  <build>
    <plugins>
      <!-- antrun to generate additional html from markdown -->
      <!-- (!) -->
      <!-- groovy plugin to perform I/O operations on disk, explained later -->
      <!-- (!) -->
      <!-- site plugin to generate the site from the current project -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>sites-modules</id>
            <phase>compile</phase>
            <goals>
              <goal>unpack-dependencies</goal>
            </goals>
            <configuration>
              <classifier>site</classifier>
              <!-- this is important: here are enumerated all artifacts
                   that are project libraries, separated by comma (,) -->
              <includeArtifactIds>module1, module2, ...</includeArtifactIds>
              <failOnMissingClassifierArtifact>false</failOnMissingClassifierArtifact>
              <outputDirectory>${project.build.directory}/site/modules</outputDirectory>
              <useSubDirectoryPerArtifact>true</useSubDirectoryPerArtifact>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

This plugin will expand all the module sites' contents into separate directories inside target/site/modules of the resources project. The last important bit here is to design the index.html in such a way that it links to the sub-sites of the modules. Because our modules have versions, we want our website project to figure out the paths of the sub-sites by itself. By making the index page dynamic, we can simply add a script that populates the page with the corresponding content, by declaring an array in a separate .js file, as presented below:

var modules = [
  "module1-1.3-SNAPSHOT-site-jar",
  "module2-1.5-site-jar",
  ...
];

The JavaScript code can use the modules array and insert the following DOM elements into the index page:

<div>
  <h1>module1</h1>
  <h2>Version 1.3-SNAPSHOT</h2>
  <a href="modules/module1-1.3-SNAPSHOT-site-jar/index.html">About</a>
  <a href="modules/module1-1.3-SNAPSHOT-site-jar/Readme.html">Readme</a>
  <a href="modules/module1-1.3-SNAPSHOT-site-jar/ReleaseNotes.html">Release Notes</a>
</div>
<div>
  <h1>module2</h1>
  <h2>Version 1.5</h2>
  <a href="modules/module2-1.5-site-jar/index.html">About</a>
  <a href="modules/module2-1.5-site-jar/Readme.html">Readme</a>
  <a href="modules/module2-1.5-site-jar/ReleaseNotes.html">Release Notes</a>
</div>

Our modules.js file is populated during the build of the resources project with help from the groovy-maven-plugin. The goal is to execute code that lists the directories inside target/site/modules and prints the names into site/config/modules.js, so we can obtain our array of module paths. The code is listed below:

...
<plugin>
  <!-- writes the names of the corresponding directories in config/modules.js -->
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <version>1.5</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>
          <![CDATA[
          println("==== Creating config/modules.js ====");

          File modFile = new File(
              "${project.build.directory}/site/config/modules.js");
          BufferedWriter modWriter = new BufferedWriter(new FileWriter(modFile));
          modWriter.writeLine("var modules = [");
          new File("${project.build.directory}/site/modules").eachDir() { dir ->
            modWriter.writeLine("'" + dir.getName() + "',");
          }
          modWriter.writeLine("];");
          modWriter.close();
          ]]>
        </source>
      </configuration>
    </execution>
  </executions>
</plugin>
...

By invoking mvn site site:jar on the resources project, we obtain the archive of the desired website. The archive can be easily expanded into an HTTP Web Server and made available to everyone interested.

Conclusions

By having all the modules configured and the resources project created, all mvn commands can be run by Jenkins CI easily, and the final website can be deployed on the HTTP Web Server as a post build step. Every time a module is released, its sub-website is published and the main website can be regenerated. This way we ensure that the latest documentation is available for developers without any additional effort or intervention, and it's done in the spirit of Continuous Integration.






Large Scale Text Classification

In the past years, artificial intelligence has become the answer to many problems, such as detecting fraud and spam messages, classifying images, determining the topic of an article etc. With the rise in the number of internet users, the quantity of data that needs to be processed has also increased. Therefore, storing and processing data on one server has become too difficult, the best solution available being processing it within a distributed system.

Cristian Raț

Cristian.Rat@Yardi.Com Software Developer @ Yardi



Classification is a decision-making process based on previous examples of correct decisions; it is therefore a supervised learning technique because, in order to make its own decisions, it needs a pre-classified set of data. The 'training' process will result in discovering which properties of the data indicate its belonging to a certain class, and the relationship between the properties and the class will be saved within a model that will be used for sorting new data. The process for creating an automated classification system is the same no matter the classification algorithm being used, and it comprises several stages: data processing, training, testing and adjusting, validation (final testing), deployment in production.

What the classification of a text actually means is the association between a text and a pre-defined text category. Each word from the document is seen as an attribute of that document; therefore, a vector of attributes is created for each document, using the words from the text. In order to improve the classification process, we can either eliminate certain words, especially the words that are very frequent in a vocabulary (e.g.: and, for, a, an), or group words in 2-3 word clusters called n-grams. Several algorithms that deal with this problem have been developed: Naive Bayes, decision trees, neural networks, logistic regression etc.

One of the most used text classification algorithms is Naive Bayes. Naive Bayes is a probabilistic classification algorithm, its decisions being based on probabilities that derive from the pre-classified set of data. The training process analyzes the relationships between the words that appear in the documents and the categories that the documents are associated with. Bayes' theorem is then used to determine the probability that a series of words might belong to a certain category. Bayes' theorem states that the probability for an event A to appear, given that another event B already appeared, is equal to the probability that event B will appear given that event A appeared, times the probability that event A will appear, divided by the probability that event B will appear.
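Written out in symbols, the verbal statement above is the familiar form of Bayes' theorem (the notation is added here for clarity):

```latex
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```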

Therefore, in the case of our problem, the probability that a certain document belongs to a category C, given that it contains the word vector X = (x1, x2, …, xn), is the following:

P(C|X) = P(X|C) · P(C) / P(X)

This equation is extremely difficult to calculate and it requires a tremendous computational power. In order to simplify the problem, the Naive Bayes algorithm assumes that the attributes from the vector X are independent (this is why the


algorithm is called "naive"), thus making the formula look more like this:

P(C|X) ∝ P(C) · P(x1|C) · P(x2|C) · … · P(xn|C)

This would be processed much faster but, if we have a big number of classes and a big amount of data, the training process will take too long for the algorithm to have any practical use. The solution is distributed processing.

Hadoop is a program that allows for processing data in a distributed manner. It comes with a distributed file system for storing data and it can scale up to several thousand computers, which should be enough to solve any classification problem. Mahout is a library that contains scalable algorithms for clustering, classification or collaborative filtering. A big number of these algorithms implement the map-reduce paradigm and run in a distributed manner using Hadoop. When the size of the data is small, the processing speed of the algorithms from this library is low compared to other options, so it doesn't make sense to use Mahout in that case; but Mahout can scale massively, and the size of the data is then not a problem. This is why the recommendation is to use it only when the amount of data is very big (more than 1 million documents used for training).

To facilitate algorithm usage, Mahout has a tool that can be executed from the command line, helping to simplify the creation of a scalable process of automated text classification. The first step is to transform the text into attribute vectors, but for this to be possible we need to have access to the documents in a SequenceFile format. A SequenceFile is a file that contains key-value pairs and is used with the programs that implement the map-reduce paradigm. To transform our files into a SequenceFile we use the seqdirectory command:

mahout seqdirectory -i initial_data -o sequencefile

Transforming the text into vectors of attributes is done by using the seq2sparse command, which creates a SequenceFile of the <Text,VectorWritable> type:

mahout seq2sparse -i sequencefile -o vectors -wt tfidf

Dividing the data into test data and training data is as easy as the previous steps. The division is random; therefore we can select the percentage represented by the test data from the total amount of data using the '--randomSelectionPct' parameter:

mahout split -i vectors --trainingOutput train-vectors --testOutput test-vectors --randomSelectionPct 40 -ow --sequenceFiles

The training and test processes are carried out using the following commands:

mahout trainnb -i train-vectors -el -o model -li labelindex -ow
mahout testnb -i test-vectors -m model -l labelindex -ow -o rezultate_test

The results of the tests can be evaluated at this point and, if necessary, in order to obtain a better result, we can modify the initial data and repeat the process until the accuracy of the classification is satisfactory. The classification of text can have multiple applications and at this point the size of the data no longer represents a limitation. Hadoop and Mahout bring the necessary instruments for creating a classification process that can work with millions of documents, a process which would otherwise have been extremely hard to build.
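To make the "naive" scoring rule concrete, here is a minimal, self-contained Java sketch of the decision step described above: each class is scored as its prior times the per-word likelihoods, computed as a sum of logarithms. The classes, vocabulary and probabilities are invented for illustration; a real system estimates them from the training data (which is what mahout trainnb does at scale):

```java
import java.util.List;
import java.util.Map;

// Hypothetical naive Bayes scoring sketch: pick the class C that maximizes
// P(C) * P(x1|C) * ... * P(xn|C). All numbers below are illustrative and
// assumed to be already smoothed.
public class NaiveBayesSketch {

    // P(word | class)
    static final Map<String, Map<String, Double>> WORD_PROB = Map.of(
            "sports",   Map.of("goal", 0.30, "election", 0.05),
            "politics", Map.of("goal", 0.05, "election", 0.30));

    // P(class)
    static final Map<String, Double> PRIOR = Map.of("sports", 0.5, "politics", 0.5);

    static String classify(List<String> words) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String c : PRIOR.keySet()) {
            // sum of logs instead of a product, to avoid floating-point underflow
            double score = Math.log(PRIOR.get(c));
            for (String w : words) {
                score += Math.log(WORD_PROB.get(c).getOrDefault(w, 1e-6));
            }
            if (score > bestScore) {
                bestScore = score;
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(classify(List.of("goal", "goal")));  // sports
        System.out.println(classify(List.of("election")));      // politics
    }
}
```

Working in log space is the standard trick here: multiplying many small probabilities directly would quickly underflow to zero.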







Internet of Things in the Java Universe

The IT experts called 2014 "the Internet of Things year", as it was one of the hottest topics of the year that just ended. The assigned title is not surprising at all if we consider that important websites such as dzone.com, jaxenter.com or oracle.com published several articles every week about the technologies that are part of the Internet of Things, and the bloggers didn't waste any chance to write about their latest IoT projects. Publishing houses were vigilant as well, dozens of titles being published in 2014, with many more waiting to hit the printing press this year. All these things happened while a whole bunch of new gadgets and intelligent devices were released, along with many software platforms and implementations of well known or lesser known protocols. Many technology passionate people have heard of popular products launched during the past few years, such as Philips Hue or Nest, but starting with 2014 one would need to put some effort into keeping up with the pace at which new devices, such as Sen.se Mother, Fitbit Charge or SkyBell, are released. IoT influences more and more areas of everyday life, such as health, with devices that monitor the patients, home nursing, gadgets for a healthy lifestyle, transportation, with connected vehicles, home automation, industry etc. Before we start talking about the ways the Java community can have its voice heard in the Internet of Things space, we owe the reader a short description of what exactly IoT is.

Definitions

The Internet of Things, or shortly IoT, is a concept discussed more and more during the past few years, but what it means is not always well understood. CASAGRAS (Coordination and support action for global RFID-related activities and standardisation) offers a definition which says the following about IoT: it is "a global network infrastructure, linking physical and virtual objects through the


exploitation of data capture and communication capabilities. This infrastructure includes existing and evolving Internet and network developments. It will offer specific object-identification, sensor and connection capability as the basis for the development of independent cooperative services and applications. These will be characterised by a high degree of autonomous data capture, event transfer, network connectivity and interoperability." [1]

The definition offered by Stephen Haller of SAP helps us see a clearer image of the Internet of Things, which is seen as "a world where physical objects are seamlessly integrated into the information network, and where the physical objects can become active participants in business processes. Services are available to interact with these 'smart objects' over the Internet, query and change their state and any information associated with them, taking into account security and privacy issues." [2]

Another explanation is given by Oracle, who states that "the Internet of Things is about collecting and managing the massive amounts of data from a rapidly growing network of devices and sensors, processing that data, and then sharing it with other connected things." [3] To better understand the dimension of these data volumes, we can take a look at the Oracle Team USA sailing team, who works with boats each equipped with 300 sensors, intended to give information about a multitude of parameters, such as strain on


the mast, effectiveness of sail adjustments, even the strength and stability of the hull. These sensors measure 3000 variables 10 times every second, producing 500 GB of raw data a day. Another interesting aspect is the fact that today only 11% of the total volume of data is generated by devices, but IDC estimates that by 2020 the percentage will rise to 40%.[4]

This descriptive title, which tries to catch the essence of the next big trend in IT, is actually an effort to rethink our relationship with the objects we use every day, and also the relationship of the objects with each other. According to the experts, IT will strongly pursue this direction. Proof of this statement is the fact that several big names of the 21st century's technology, such as Cisco or Bosch, performed studies on this subject, concluding that IoT projects will exceed an economic value of $15 trillion annually by 2020[5]. Also, the analysts from Cisco state that in 2010 there were over 12.5 billion objects connected to the Internet and estimate that there will be approximately 25 billion intelligent things connected to the Internet by the end of 2015. For 2020, their prediction is 50 billion connected things [6].

Oracle and Java Embedded

In this context, Java enters the stage, both as a platform based on the Java Virtual Machine and as a programming language, having a community of over 9 million


users. We make this distinction between platform and language [7] because a device running a JVM is not limited to executing Java applications only; these may be, in certain conditions, applications written using Scala, Clojure etc. In the past, the programming of embedded devices was done mainly in low level languages such as C or assembly.

In this article we will try to form an opinion about the IoT solutions proposed by Oracle, a company whose Java platform implementation benefits from the highest level of popularity among programmers. In future articles, however, we will look closely at the features of other Java projects for IoT, such as those belonging to the stack developed by the Eclipse Foundation.

During the past few years, Oracle has invested massively in a series of products suggestively called Java Embedded, which offer Java programmers the possibility to write applications for such devices, from smart cards and wireless modules to single board computers (SBC), such as the Raspberry Pi. The Java Embedded platforms are the main thing Oracle offers to embedded developers and through which it contributes to the Internet of Things space. Oracle's vision for Java 8 was to release, besides the Standard Edition (SE), two other important variants of the platform, namely Oracle Java ME Embedded 8 and Oracle Java SE Embedded 8, adding the Java Embedded Suite to the list. Henrik Ståhl, vice president of product

management for Java and IoT at Oracle, states in the November/December 2014 issue of Oracle Java Magazine that by launching these variants of the platform they have made „available the features the programmers are used to in Java SE, on the embedded platforms also" [8]. With the release of the new versions of the distributions mentioned above, Oracle has tried to bring the two platforms to the highest level of compatibility and, at the same time, to align them with Java SE 8. To accomplish this, they introduced the concept of Compact Profiles: developers can choose between the complete set of Java SE APIs and three other subsets which make available only those APIs that are necessary for the relevant use cases. Such a use case could be running an OSGi (Open Service Gateway initiative) stack. As we'll see in a future article, OSGi plays an important part in the efforts made by the Eclipse Foundation to implement a complete IoT stack, called the Open IoT Stack for Java. Starting with Java version 8, released in the first part of 2014, we saw that Oracle has put considerable effort into bringing the Micro Edition (ME) variant of the platform up to date, confirming the corporation's commitment in the IoT solutions war. Among the enhancements made to the platform is the increased efficiency of the deployment process on small devices, such as intelligent sensors or embedded gateways. The APIs have also been updated to answer the programming needs of the target devices. Through this new version, Java ME has more things in common with Java SE, but there are some features, such as reflection or lambda expressions, that still have to be added. This is an important aspect, because this way any Java programmer will be able to get involved in embedded projects in a short time, without much effort. Henrik Ståhl announces that in 2015 some hardware manufacturers plan to integrate Java ME in their devices, which will lead to a higher adoption rate of the platform.

Figure 1. Overview of the Oracle Java ME Embedded 8 platform [9]

The components built using the embedded solutions we're talking about, deployed in an IoT context, give business applications access to resources installed in the surrounding environment, both to receive input from them and to issue commands. Such a use case is the orchestration of heterogeneous temperature and lighting control systems in a building. We notice that the ability of Java ME to control equipment such as sensors, valves or servo-motors is a fundamental aspect of building an IoT infrastructure. Looking at it from a distance, we can notice that the Java architecture allows us to create vertical applications, as Maulin Patel, the leader in embedded processing solutions at Freescale, showed [8]: first we collect data from intelligent objects with Java ME, then we switch to Java SE for gateway services, and in the end we execute the data processing and handling with Java EE in the cloud. Almost every time there is a talk about IoT, the security problem is brought up. In an open and heterogeneous environment such as an Internet of Things infrastructure, security is essential but hard to achieve. It is enough for an attacker to gain access to one of the IoT solution's components to be able to exploit the possible flaws in its defensive system. For instance, when it comes to intelligent meters, the person who would profit most from compromising these units could be the owner of the building where they were installed. Thus the potential attacker may even have physical access to the equipment, a problem which raises questions concerning the levels at which the security mechanisms should be implemented.
Now that we have noted the seriousness of this matter, we can announce that there is good news for IoT developers in the Java universe, because this platform guarantees data security across the entire vertical stack of the implemented system. Security is a feature embedded into the platform's architecture, being developed and updated constantly with each new version. We will give more details concerning the security model offered by Java ME Embedded 8 in one of the next paragraphs, when we talk about a few technical features of the platform.

www.todaysoftmag.com | no. 31/january, 2015

Java ME Embedded 8

The platform dedicated to the devices with the fewest resources, for instance cards, is suggestively called Java Card. However, the first product in the Oracle family that brings an important influence in the IoT space is Java ME Embedded 8. Consequently, in the following paragraphs we will focus on this product. Before we look at some of its details, we have to mention that Java ME Embedded comprises two versions: Java ME Embedded and Java ME Embedded Client. Java ME Embedded 8 is a platform that can be used by devices that have less than 1 MB of memory. Thus it is fit for intelligent objects that don't possess a graphical interface, with long "on" time and limited resources. As we can see in Figure 1, the foundation of Java ME Embedded 8 is the virtual machine, found here as the Connected Limited Device Configuration, or CLDC 8 for short. This component represents a subset of Java SE 8, dedicated to embedded devices. As we mentioned above, with version 8, CLDC takes a first step towards better alignment with Java Standard Edition and makes an important leap from CLDC 1.1.1. Consequently, we have annotations, generics and many other Java features familiar to all developers. Although there has been an important evolution with the release of CLDC 8, binary compatibility with the previous version hasn't been forsaken. On the foundation represented by CLDC 8 lie several components essential to the platform. One of them is the Generic Connection Framework (GCF 8). As one can guess, this component handles connectivity. This framework is necessary because in the embedded space the connectivity options are numerous and the devices our applications run on have various communication interfaces with the outer world: some may have Wi-Fi connectivity, others cellular, Bluetooth or cable. Also, for optimized connectivity control, GCF 8 exposes the AccessPoint API.
In addition, GCF 8 comes with support for IPv6, freeing Java ME Embedded 8 developers from the fear of IPv4 address exhaustion. Back to the subject of security in the Internet of Things world, we can offer a few details about the way GCF 8 solves this problem. Java ME Embedded 8 comes equipped with implementations of the latest security standards, among which we can count Transport Layer Security 1.2 and Datagram Transport Layer Security 1.2. Thus, Oracle assures the users of the platform that it offers "the higher network and authentication encryption levels" [9]. Another building block of Java ME Embedded 8 is the Micro Edition Embedded Profile 8 (MEEP 8). This component is responsible for defining the application model and the container where the application runs – in general, its lifecycle. Through MEEP 8 we can share code among applications, update components in the system or apply patches to the application. Sharing libraries – suggestively called LIBlets – is also done through MEEP 8, and it contributes to minimizing memory needs and to modularizing applications. In addition, MEEP 8 offers applications the possibility to communicate with one another either synchronously (Inter-MIDlet Communication, or IMC) or asynchronously, through an event-based messaging system. Security is an important topic for MEEP 8 too, because one can define security policies for authentication and authorization based on the situation at hand. Thus, the code is loaded and executed in a secure environment, as every component is associated with a client and has specific permissions, which have to be verified at every access attempt. A crucial component of Java ME Embedded 8 is the Device Access API, which offers applications access to peripheral devices such as sensors, switches or LEDs. This component was part of the previous versions as well, but now it comes with new features, among which we can enumerate late binding, which allows new peripherals to be added without changing the API. Together with these building blocks, Java ME Embedded 8 comes with numerous APIs, such as those for web services or location.
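To get a feeling for the style of API involved, here is a minimal sketch of toggling an LED through the Device Access (Device I/O) layer. It only compiles and runs on a Java ME Embedded 8 runtime that provides the `jdk.dio` packages, and the pin ID used is board-specific and purely illustrative:

```java
import java.io.IOException;
import jdk.dio.DeviceManager;
import jdk.dio.gpio.GPIOPin;

// Blinks an LED wired to a GPIO pin. Pin ID 17 is a placeholder;
// the real ID depends on the board's peripheral configuration.
public class BlinkLed {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (GPIOPin led = DeviceManager.open(17)) {
            for (int i = 0; i < 10; i++) {
                led.setValue(!led.getValue()); // toggle the LED state
                Thread.sleep(500);             // half a second on / half off
            }
        }
    }
}
```

Thanks to the late binding mentioned above, a pin registered for a newly attached peripheral can be opened the same way, without any change to the API.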
Having all these Java ME Embedded components at our disposal, building embedded applications and contributing to the Internet of Things space is within our grasp. Java ME SDK 8 comes to our aid; it is a complete toolkit, created to meet our needs while creating and maintaining an application. This SDK offers an emulation environment that gives us the possibility to test our applications even if the target devices aren't available at development time. We can also debug, both in emulation mode and when the application is running on the device. To make this toolkit complete, Oracle offers plugins for the NetBeans and Eclipse IDEs which incorporate all the features of the SDK. We will discuss more and give examples of how to use Java ME SDK 8 in one of the following articles. Java ME Embedded Client is an implementation of CDC (Connected Device Configuration), which, in principle, is the configuration targeted at mobile devices with somewhat more resources, such as smart phones. For Java ME Embedded Client this configuration has been narrowed and optimized to fit low-end and mid-range embedded systems. Although the footprint of this configuration is small, it offers a large portion of the Java language. Thus Java ME Embedded Client is intended for intelligent objects with less than 10 MB of memory and no graphical interface.

Conclusions

The Romanian IT industry is not foreign to the Internet of Things space; we have begun to see products made in Romania released, such as Pocketo or Tintag. There are also companies in Romania involved in projects that fit the IoT paradigm; for instance, we see such a project in Braşov, in the automotive domain, built around the concept of the connected car. Another thing to be glad about is the fact that there are IoT events, one of them being the ALT Festival, which took place in November 2014 in Braşov. An important objective for Oracle lately has been the advancement of the Java platform in the fight being fought among IoT solutions. Moreover, the corporation has expressed its desire to win this battle, so that Java would become the choice of the majority of the specialists involved in such projects. The answer of the opponents was quick, however, with a few voices expressing their skepticism concerning the fitness of the platform for the IoT space. There are many arguments both for and against this idea, but one thing is certain: Java has come a long way to reach its current level of maturity, diversity and applicability. The efforts of the last years have brought forth a new family of products which looks promising and, more than that, is beginning to prove its value within real IoT projects. We will clearly see this in the next article, when we will touch, through some examples, on the practical side of the Java ME Embedded 8 platform.

Resources

[1] CASAGRAS, "RFID and the Inclusive Model for the Internet of Things"
[2] Stephen Haller, "Internet of Things: An Integral Part of the Future Internet", SAP Research, 2009
[3] "The Internet of Things: Manage the Complexity, Seize the Opportunity", Oracle Corporation, 2014
[4] "IDC Digital Universe Study", sponsored by EMC, December 2012
[5] DZone Research, "2014 Guide to Internet of Things"
[6] http://share.cisco.com/internet-of-things.html
[7] Benjamin Evans, Martijn Verburg, "The Well-Grounded Java Developer", Manning, 2013
[8] "Java Development for the Internet of Things", Oracle Java Magazine, November/December 2014
[9] http://www.oracle.com/technetwork/articles/java/ma14-java-me-embedded-2177659.html

Dănuț Chindriș

danut.chindris@elektrobit.com Java Developer @ Elektrobit Automotive




programming

What messaging queue should I use in Azure?

In theory, sending a message over a wire to another device is a simple task. But sending a message in a reliable and secure way can be a pretty hard job. In an IoT era, where every day the number of devices connected to the internet increases drastically, we need to find different communication mechanisms.

Radu Vunvulea

Radu.Vunvulea@iquestgroup.com Senior Software Engineer @iQuest

Because we cannot control when a device is connected to the internet and ready to receive our package, we need to find different ways to communicate with it. In this post we will look at the different messaging systems offered by Microsoft Azure. For each messaging system we will try to identify its strengths and when we should use it. Once we understand each messaging system, we will compare them one by one. At the end of this post we will try to find a perfect messaging system that can be used in any situation; for different use cases we may need different messaging systems, based on our needs. The messaging solutions discussed in this post are:
• Azure Storage Queues
• Azure Service Bus Queues
• Azure Service Bus Topics
• Azure Event Hub

Azure Storage Queues

This messaging system is part of Azure Storage and allows us to store a lot of messages in the same queue. When I say a lot, imagine queues that can reach not 1 GB, not 1 TB, not even 10 TB. Using Azure



Storage Queues, we can have a queue that reaches 200 TB. Because of this, we can store large amounts of data in queues without worrying about the size of the queue. Another benefit of this type of queue is the number of concurrent clients which, in theory, is unlimited; the only practical limit is the bandwidth. The maximum size of a message is 64 KB but, in combination with blobs, we can have messages that reach 200 GB. Of course, there are also some limitations that we need to take into account. First of all, even if the size of the queue can be very large, the maximum TTL (Time To Live) of a message is 7 days. This means that a message needs to be consumed or renewed within 7 days; otherwise the message will be removed. Even if we have support for basic messaging capabilities like scheduled delivery and batch support, we don't have support for state management, duplicate detection or transactions. An interesting feature of Azure Storage Queues is its logging capabilities. Users can activate the logging mechanism and track all the actions that happen on the queue; information like the client IP is tracked and stored out of the box. Clients also have the ability to only peek at the messages in the queue, without removing or locking them. This means that if a client peeks at a message, other clients will still be able to access the same message from the queue. Because of this, Azure Storage Queues are great when you need a messaging system that can track all the actions happening on it. It can also be a good solution when you know that the size of the queue will be bigger than 80-100 GB. For large queues, this can be the best queue mechanism.
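The peek behavior described above can be illustrated with a small plain-Java sketch – an in-memory stand-in meant only to show the semantics, not the Azure Storage SDK itself:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class PeekVsReceive {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("telemetry-1");

        // Peeking reads the head without removing or locking it...
        System.out.println(queue.peek());  // telemetry-1
        System.out.println(queue.size());  // 1 -- message is still visible

        // ...so a second client can still consume the very same message.
        System.out.println(queue.poll());  // telemetry-1
        System.out.println(queue.size());  // 0 -- now it is gone
    }
}
```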

Azure Service Bus Queues

This messaging system is part of a more complex messaging infrastructure offered by Microsoft and supports more complex scenarios. Because of this, we will discover that the size of a queue is limited to 80 GB, but the features offered by Service Bus Queues are richer. It is important to know from the start that these two messaging systems are built on different services and have nothing in common. The maximum size of a message is 256 KB, larger than in Azure Storage Queues, and a message is persisted in Service Bus for an unlimited period of time. On top of this, there is full support for the AMQP protocol, a feature that can be very useful for embedded devices. There is support for dead-lettering, which allows us to automatically move a message to a secondary queue if the message expires or clients cannot consume it. There is full support for transactions, handling a specific number of messages in the same transaction. On top of this, there is support for grouping multiple messages in the same session – in this way, we can ensure that a client will consume all messages that are part of a specific session. If we know the ID of the session, we can consume messages from that specific session. An interesting feature of Azure Service Bus Queues is duplicate detection. Once it is activated, duplicate messages are detected: the moment we want to add a message that already exists in the system, the message will not be added. This is great when we want to ensure that we have unique messages in the queue. Messages can be consumed from the queue in two different ways – Peek and Lock, or Receive and Delete. We can peek at a message from a queue and make it unavailable to the rest of the clients until we confirm that we consumed it successfully or abort the action (we can also specify a timeout). The security capabilities of Azure Service Bus Queues are also more complex: we have the ability to control access to the messages in a more fine-grained way. Based on these features, Azure Service Bus Queues are great when we need duplicate detection, transaction support or to store messages for an unlimited period of time.
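The idea behind duplicate detection can be sketched in a few lines of plain Java, using a set of already-seen message IDs. This is a toy illustration of the concept only, not how Service Bus implements it:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Toy queue that drops messages whose ID was already enqueued,
// mimicking the duplicate-detection behavior described above.
public class DedupQueue {
    private final Queue<String> messages = new ArrayDeque<>();
    private final Set<String> seenIds = new HashSet<>();

    /** Returns true if the message was accepted, false if it was a duplicate. */
    public boolean send(String messageId, String body) {
        if (!seenIds.add(messageId)) {
            return false; // duplicate: silently ignored, as in Service Bus
        }
        messages.add(body);
        return true;
    }

    public int size() {
        return messages.size();
    }
}
```

Sending the same message ID twice leaves only one copy in the queue, which is exactly the guarantee described above.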

Azure Service Bus Topics

In contrast with Azure Service Bus Queues, which deliver messages one-to-one, Azure Service Bus Topics allow us to deliver messages one-to-many. This means that we can deliver the same message to multiple clients, called subscribers. This messaging system is basically an ESB (Enterprise Service Bus), allowing "publish/subscribe" communication. From the perspective of features, Topics are very similar to Azure Service Bus Queues, with some additional features and capabilities; the similarity exists because both messaging systems are built on the same brokered messaging infrastructure. Each topic used to send messages can have a maximum of 2,000 subscribers, which means the same message can be received by 2,000 subscribers. This can be very useful when we need to distribute messages to different systems. Also, a subscription can be added at runtime: we don't need to stop the system or recreate the topic. Once a new subscription is created, all the new messages sent to that topic will be received by the new subscription as well. An interesting feature is filter support. We can attach a filter to each subscription; that filter will allow only the messages that respect the filter rule to reach that specific subscription. In this way, each subscription can listen for specific messages. Both messaging systems, Azure Service Bus Topics and Queues, have auto-forwarding capabilities, but for Topics they are more interesting: we can automatically forward a message from a subscription to another Service Bus Topic or Queue. Each message sent over Service Bus can have a collection of properties. Properties are used when a custom filter per subscription is applied. Also, each subscription can have a custom action that is executed when a message is received by it. Even if the actions that can be executed are very simple, they can be very useful in some situations – for example, we are allowed to change the name or the value of a property. Based on these capabilities, Azure Service Bus Topics are perfect to use when we need to distribute a message to multiple listeners. In systems where the number of clients that should receive a message can change dynamically, Azure Service Bus Topics can be useful.
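The topic/subscription model with per-subscription filters can be sketched in plain Java – an in-memory illustration of the concept, not the Azure API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Minimal in-memory "topic": every published message is offered to all
// subscriptions, but each subscription keeps only the messages that
// pass its filter, mirroring Service Bus Topics.
public class MiniTopic {
    public static final class Subscription {
        final Predicate<Map<String, String>> filter;
        final List<Map<String, String>> received = new ArrayList<>();

        Subscription(Predicate<Map<String, String>> filter) {
            this.filter = filter;
        }
    }

    private final List<Subscription> subscriptions = new ArrayList<>();

    // Subscriptions can be added at runtime, without recreating the topic.
    public Subscription subscribe(Predicate<Map<String, String>> filter) {
        Subscription s = new Subscription(filter);
        subscriptions.add(s);
        return s;
    }

    // Messages carry a collection of properties, which the filters inspect.
    public void publish(Map<String, String> properties) {
        for (Subscription s : subscriptions) {
            if (s.filter.test(properties)) {
                s.received.add(properties);
            }
        }
    }
}
```

A subscription filtering on, say, `"eu".equals(p.get("region"))` receives only the matching messages, while an unfiltered subscription receives everything.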

Azure Event Hub

Even if, in theory, it is part of Azure Service Bus, Azure Event Hub is a special messaging system, used to ingest large amounts of data in a short period of time. This system is capable of ingesting more than 1 million messages per second without any kind of problem. It is constructed around a streaming concept: all messages that flow into the system are seen as a stream of data. The latency is very low and, even if the quantity of data that flows through is very high, the system remains reliable and stable. An interesting feature of this system is the capacity to navigate between messages that were already received. There is a concept similar to a cursor, so we can iterate over old messages as well – a replay capability. On top of this, the stream of messages can reach multiple consumers at the same time, using the consumer group concept. An important feature that can also be found in Azure Service Bus is message order preservation: consumers receive messages in the order they were sent. Event Hub capacity scales in an interesting way, by adding multiple TUs (Throughput Units). Each TU allows 1 MB/s of ingress and 2 MB/s of egress. The default retention of a message is 1 day, and messages can be stored for up to 7 days; of course, this comes with some costs. Features like dead-letter queues, transaction support or TTL options cannot be found in Azure Event Hub. This solution is perfect for high volumes of message processing, such as telemetry or IoT use cases.
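Based on the per-TU figures quoted above (1 MB/s ingress, 2 MB/s egress), a rough sizing calculation might look like this – a back-of-the-envelope sketch, not an official sizing tool:

```java
public class ThroughputUnits {
    static final double INGRESS_MB_PER_TU = 1.0; // per the figures above
    static final double EGRESS_MB_PER_TU = 2.0;

    /** Smallest number of TUs covering both the ingress and the egress load. */
    static int requiredTUs(double ingressMBps, double egressMBps) {
        int forIngress = (int) Math.ceil(ingressMBps / INGRESS_MB_PER_TU);
        int forEgress = (int) Math.ceil(egressMBps / EGRESS_MB_PER_TU);
        return Math.max(forIngress, forEgress);
    }

    public static void main(String[] args) {
        // e.g. 100,000 messages/s at 1 KB each ≈ 97.7 MB/s ingress,
        // read by two consumer groups ≈ 195.3 MB/s egress
        double ingress = 100_000 * 1.0 / 1024;
        System.out.println(requiredTUs(ingress, 2 * ingress)); // 98
    }
}
```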

What is the best solution?

There is no single answer to this question. Based on our needs and requirements, we may use different messaging systems. Azure Storage Queues are perfect when we need to store and manage large amounts of messages but, when we need more control, Azure Service Bus Queues can be a better solution. For use cases where we need to distribute messages to multiple listeners, Azure Service Bus Topics are the best. But for processing large volumes of messages, nothing can beat Azure Event Hub.

Conclusion

In this post, we looked at the different messaging systems offered by Azure and identified the most important features of each of them. We saw the most important use cases where we can use these systems and what their weak points are. To close, I would like to invite you to visit the Azure web site and discover more about these great services.



legal

New Technologies – on the eve of Data Protection Day 2015

Once, Arthur C. Clarke, the author of the famous science fiction novel "2001: A Space Odyssey", stated: "Any sufficiently advanced technology is indistinguishable from magic."

Each day, we are amazed by ever new wonders of science and technology which are becoming part of our everyday life. Things such as the Internet of Things, tablets, smart phones, mobile applications, drones, analytics software, online tracking tools and geolocation tools, self-driving cars, storage in the cloud, Google Street View, etc. are taking their place in our vocabulary and, often, in our habits. For some of us, fans of the Star Trek series (among whom I include myself), the news about NASA announcing the creation of a Tricorder[1] which can collect medical data about patients and diagnose possible illnesses is an opportunity to get enthusiastic. However, for the famous Stephen Hawking, helped by technology to survive and interact with those around him for many years, the technological bloom is a matter of gloomy forecasts[2]. Legal advisers cannot keep dreaming too much about the magic of new technologies, because of the legal challenges they encounter. Often, the question is whether legislation can keep up with it and if/how it should be changed in order to reflect the new realities. Since on January the 28th we celebrate Personal Data Day[3] (called Data Protection Day in Europe or Privacy Day outside of Europe), I have decided to try

[1] http://www.nasa.gov/centers/ames/researchpark/news/partners/2013/scanaduscout.html
[2] http://www.mediafax.ro/stiinta-sanatate/avertismentul-lui-stephen-hawking-inteligenta-artificiala-ar-putea-aduce-sfarsitul-omenirii-13697907
[3] http://www.coe.int/t/dghl/standardsetting/dataprotection/Data_protection_day_en.asp

stepping out of the "technological daydreaming" and using this good opportunity to reflect a little upon some of our favorite technologies, which can collect and use our personal data in the most varied ways.

Mobile applications, tablets and smart phones absorb data massively

The protection of personal data gathered by means of mobile applications (app privacy) remains a fresh subject which deserves the developers' attention. Many mobile applications collect and use the users' personal data. This implies answering a series of questions, such as: Is it a worldwide application or not? What information and data is being collected? Where is it stored? How is it used? If and to whom can it be revealed? Taking the possible legal risks into account, a well-prepared policy regarding personal data (a Privacy Policy), to be accepted by the user, is not a fad, but a necessity. In these circumstances, it is more and more often recommended to implement the concept of Privacy by Design – which implies taking the possible legal risks into consideration right from the incipient steps of developing the application (for instance, before implementing a geolocation feature into the application). You can read more details about this concept and the types of personal data that can be collected via mobile applications in the article "The developers of mobile applications and the personal data. Any connection?"[4], published in the 21st issue of Today Software Magazine.

Analytics software, online tracking and geolocation tools

Nowadays' technology produces and facilitates access to a huge amount of information and data. And those who are somewhat familiar with the concept of "big data" probably already know that big data = big business. Some even dare to state that personal data (even in anonymous form) can be considered "the new petrol", with an added advantage: it can be "extracted" from the most varied sources. In this respect, analytics software, online tracking and geolocation tools play an essential role, as they are powerful instruments that can systematically track the users' activity. They are capable of dealing with an enormous amount of data, analyzing it, combining it and looking for patterns – basically, with a view to turning it into value for the companies. But the value resulting from the use of this data is accompanied by a potential risk of abuse. Therefore, from the legal point of view, activities carried out in an inappropriate manner through analytics software, online tracking and geolocation tools may represent a major threat to the data of clients and online users and, thus, to the concept of data privacy. In most cases, the perspective of the analytics industry may be that the user/client is the one who decides how to use the tools available to him and what data he wants to offer access to.

[4] http://www.todaysoftmag.ro/article/783/dezvoltatorii-de-aplicatii-mobile-si-datele-personale-vreo-legatura



Nevertheless, there are certain rules and recommendations that need to be taken into consideration, so as to try to prevent the risk of abuse (for instance, by correctly informing the users about the operations through which their data can be collected and their online activity can be tracked, so that they can make an informed decision – whether to accept it or not).

The drones – the UFOs in the sky

Drones can be commercial or surveillance drones, but also small, light drones used as a hobby. They can all be intrusive, since most of them have embedded filming or data-transmission devices which allow them to take pictures or shoot videos – most of the time without the consent of the people being photographed or filmed. Since the image or the voice of a physical person is personal data, we understand why this activity is not exactly OK without the agreement of the respective person. In his TMT Predictions 2015 report[5], Deloitte estimates that the total revenues of the drone industry will increase to 200-400 million dollars in 2015 (the equivalent of the list price of a medium passenger plane). However, in the short and medium term, it seems that the usage of drones will be limited, a key factor being legislation. In some countries there is no legislation in force, or it is uncertain; in others, the general rules regarding planes still apply. In Romania, theoretically, drones with a maximum take-off weight smaller than or equal to 1 kg can be freely used if:
• They are operated in an open, non-populated area (with no buildings meant as residence);
• They do not have filming or data-transmission devices mounted on them; and
• They do not go beyond the visual range of the person controlling the drone from the ground (no more than 150 m horizontally and 100 m in height).

[5] http://www2.deloitte.com/global/en/pages/technology-media-and-telecommunications/articles/tmt-preddrones-high-profile-and-niche.html

The self-driving cars – a necessary nightmare?

Self-driving cars are a "hot" topic. But, besides some attractive advantages (efficiency, safety, the reduction of accidents, etc.), they also give rise to controversies and risks regarding the usage of the data of the people travelling in these cars. As they will permanently have a wireless connection, self-driving cars can be constantly tracked. And the location data and other information collected through such a car can be considered personal data of the people using it. Therefore, questions such as: Who should own or control the data collected by these cars? What kind of data is stored? Whom can it be transferred to, and in what way? For what purposes could it be used? – are relevant and should be taken into consideration when we weigh the advantages and disadvantages of this new technology. In this context, it is natural for the public to ask itself whether, in exchange for these advantages, we are not actually giving up our freedom of movement to be followed at every step. In the media abroad[6], journalists draw our attention to the fact that we should be interested in the way these cars are developed in the present, since – otherwise – in the future they may not be a symbol of individual freedom but, on the contrary, a means of monitoring individuals. This points to the fact that, until there is a specific law for this technology, the Privacy by Design procedure seems suitable to be applied in the case of self-driving cars as well, attempting its implementation right from the first phases of the development of the car.

In the end

Happy Data Protection Day 2015 and… Let's be private out there!

[6] http://www.theguardian.com/commentisfree/2014/jun/02/google-driverless-cars-safety-climate-privacy

Claudia Jelea
claudia.jelea@jlaw.ro
Lawyer @ Jlaw


management

Gogu and the alternatives game

Simona Bonghez, Ph.D.

simona.bonghez@confucius.ro Speaker, trainer and consultant in project management, Owner of Colors in Projects

“Well, finally…” Misu sighed with relief on seeing the name and especially Gogu’s face on the screen of his smartphone. But he hardly managed to say anything before the outpour of Gogu’s words broke through the little device, as if a dam had given way: “What has happened, good man, to make you call like crazy?! Do you think I am blind and couldn’t see you called the first time? Who died? Has something exploded? ‘Cause I can’t possibly have a quiet day! Just one day, pal, that’s what I allowed myself to take off. On good grounds. One day! And you are calling me five times in one hour. Five! May thunder and lightning strike your phone anytime you reach for it again! Or is it that you are suicidal and you want me to fix this problem for you, which would make me extremely happy right now…”

He had probably run out of air, as the stream was interrupted and one could hear Gogu gasping for breath. You might have expected Misu to jump at the opportunity and begin to talk, but it seemed he had foreseen such a reaction, and now he was waiting for the “flood” to calm down. And, indeed, once his lungs refilled, Gogu lost some of the initial steam. He only added: “So?”

A broad smile appeared on Misu’s face: he had got away with it much more cheaply than he had imagined. Making no haste at all, he told Gogu about the problem he was dealing with and because of which he had insisted on reaching him, even on his day off. He was careful, though, not to bring this aspect up again; it was not a good time to provoke another broadside.

“Let me see if I got this right”, Gogu tried to summarize the information. “Chief asked you to find a project manager for the implementation required by the client. And you insisted on giving him the answer right away, which is by no means the kind of attitude you normally adopt; and he didn’t agree with your proposal. 
And now, the second round: you want to suggest Tibi, but you don’t want to take any more chances and, therefore, you make phone calls like a desperate man and you want me to take this matter off your shoulders and take responsibility in case of a possible, even probable, failure. What a clever boy you have turned into, overnight!”

There came another refilling of the lungs. Misu’s penitent face showed he was ready for the worst. But just like the last time, things had calmed down at the other end of the wire. For those who knew Gogu, it was obvious that he had entered his searching-for-solutions mood. This was another thing Misu liked about Gogu: invariably, he would end up thinking about solutions. All it took was showing him the problem, just like you show a stick to a dog. Except that Gogu doesn’t wag his tail, thought Misu, and banished the image of Gogu going on his hands and knees, happy to have the chance of running after the stick.

“Any idea, Gogu?” Misu dared to break the silence.

“Any idea, any idea!… I was thinking about Cristina. She is mature enough, she has proved it in the project with the English, she has experience in working in a virtual team and she already knows many of those she will have to collaborate with. It won’t be easy with the client, but that’s it; if it had been easy, anyone could have done it. She deserves a chance.”

Misu kept silent.

“Well, say something”, Gogu insisted. “Even with your speed of reaction, the information must have reached your brain by now. Earth to Misu: Hello! Or is it that you think she is not ready yet?”

Misu was really confused. “But I was thinking about Tibi… He is much more experienced.”

“Well, yes, you tough guy, but if you go again with a single option to Chief, you give him a choice between yes and no. And if the answer is no once more, you will again be at your wits’ end. You have to give Chief two viable choices; he will compare the nominations and will choose the better of them. Thus, you no longer run the risk of leaving him without having solved the problem. This is how the human mind works: it compares alternatives, options; if you offer Chief this possibility, his mind will explore the opportunity of comparing and choosing the best of the presented alternatives. Do you get it? Tibi or Cristina. Tibi, more experienced, but already involved in a difficult project, bla-bla-bla, versus Cristina, without too much experience, but eager to learn, to prove herself, to assert herself, bla-bla-bla; you add a thing or two.”

“What do I add? The bla-bla?” Misu was making fun of Gogu.

“Look, you are getting on my nerves. Understood?”

“Yes, sir. But what if it doesn’t work? You know Chief is kind of whimsical… Anyway, I will call you as soon as I talk to him…”

“Do not call me!” Gogu stressed every word. “I believe I’m going to be out of signal, I can no longer hear you, Misu… hello, hello… I’m going out of the signal area and coming back tomorrow morning. Bye!” Gogu cut the conversation off, smiled to himself and placidly resumed his work on the bicycle.

“Daaad…”

“Oh, no! What is it with you people today?! What do you want from me?” Gogu turned hopelessly towards his son. 
“This is definitely not my day. Why can’t I enjoy this beautiful day, the bicycle, working on it or simply doing nothing?! Tell me, what now?”


But his son knew his role of grumpy-guy-on-duty and did not take his theatrical mumble seriously. He began, undiscouraged by his father’s apparently unfriendly attitude: “I would like to go to Andu’s birthday party. What do you say: can you come and pick me up around 11-12? Or, so as not to disturb you, I could sleep over at his house. We can play Assassin’s Creed, have a little chat…”

“What? What do you mean, sleep at Andu’s? Don’t you have a home? You can play those assassins from your home, too. You’d better play Fifa2014.”

“Oh, all right, then. Please call me at about half past eleven, a quarter to twelve, when you are on your way to pick me up from Andu’s. Thanks, dad!” The kid turned around, pleased, and went away whistling.

Gogu remained speechless when he saw the boy’s good humor and wised up to the trap. But it was too late: a promise is a promise. And the boy was even happy that his father would pick him up, as if it wouldn’t have been more natural for the child to come home at 9 or 10 in the evening, at the latest. God, this kid had twisted him round his little finger. He had given him two options, of which Gogu, himself!, chose the more convenient. Ufff! The same mechanism he had explained a few minutes earlier to Misu had been applied – successfully! – on him.

Ha! And a brilliant idea hit him: “Wait a minute… Tomorrow, instead of asking Chief whether he agrees to let 6 people go to the mammoth Conference, I will offer him two alternatives: only six people from our department or, even better, we also take our colleagues from testing and we will be 10 people going… This is cool!”


