Issue 25/July 2014 - Today Software Magazine


No. 25 • July 2014 • www.todaysoftmag.ro • www.todaysoftmag.com

TSM

TODAY SOFTWARE MAGAZINE

Agile Design Principles
Scrum with Extreme Programming
PostgreSQL's JSON support
GO and find the tool
Data Integration with Talend Open Studio
Securing Opensource Code via Static Analysis (II)
iOS 7 blur
RICAP: innovation from lab to market
ZenQ
Gogu and Mișu's Old Man
The Cluj IT Ecosystem from the startup perspective
Java 8, news and improvements



6 The Cluj IT Ecosystem from the startup perspective Ovidiu Mățan

22 GO and find the tool Marius Ciotlos

10 RICAP: innovation from lab to market Silvia Ursu

11 ZenQ Mihai Costea

12 On the Relevance of Student Work Practice Andrei Kelemen

15 Romanian IT quo vadis Ovidiu Simionica

17 Java 8, news and improvements Silviu Dumitrescu

19 PostgreSQL’s JSON support Raul Rene Lepsa

24 Scrum with Extreme Programming Alina Chereches

27 Agile Design Principles Dumitrița Munteanu

31 iOS 7 blur Mihai Fischer

33 Data Integration with Talend Open Studio Dănuț Chindriș

35 Securing Opensource Code via Static Analysis (II) Raghudeep Kannavara

39 Gogu and Mișu’s Old Man Simona Bonghez


editorial

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine

The latest local news in Cluj is the purchase of LiveRail by Facebook. All newspapers have already published this information and I am sure that many of our magazine's readers know the details of the transaction. By the time this piece of news appeared, I had already finished writing my article on The IT Ecosystem in Cluj, from the startup perspective, which I ended on a tone of reserved optimism. The main reason was the lack of a real success story that could be shown as an example. Now, however, with LiveRail as a company whose value was confirmed by its acquisition by Facebook, things look much better, on paper at least.

Still in the area of promoting Romanian IT, it gives me pleasure to announce the second edition of IT Days, www.itdays.ro, on December 3rd-4th 2014, an event organized by Today Software Magazine, which brings on stage the best local specialists in the IT area. We will have two sections dedicated to technical presentations, one dedicated to trends and another one for startups and university projects. The first guests who accepted to join us are: Bogdan Iordache, How To Web organizer and co-founder of TechHub Bucharest; Josef Dunne, co-founder of Babelverse; and Silviu Dumitrescu, Java expert and Accesa Line Manager. The list is only at the beginning and we will come back with details.

Because it is holiday season, we have prepared a contest for our readers, in collaboration with Honda Cluj. It gives you the opportunity to win a car for a weekend trip, plus a full gas tank. The winner will be drawn at the launch event of the 25th issue of TSM.

In this issue, you will find a series of articles on startups. The IT Ecosystem in Cluj from the startup perspective presents the most important elements involved in supporting innovative ideas, as well as a list of the most promising startups. ZenQ, a new startup, proposes a new way to thank your friends. The RICAP program invites startups to a coaching program, and the Danis Foundation presents the achievements of the financial education program for young entrepreneurs, "Drive your community for better". Students are in the limelight, and Cluj IT Cluster suggests a new structure for university practice in "On the relevance of student practice".

I would also like to mention the technical articles of this issue: Java 8, news and improvements presents part of the changes in Java 8, which will alter the way we write Java code. PostgreSQL's JSON support describes the original approach added in the latest version of PostgreSQL. Agile Design Principles brings to your attention a series of design patterns which can adjust class architecture to Agile principles.

Enjoy your reading!

Ovidiu Măţan

Founder of Today Software Magazine

4

no. 25/July, 2014 | www.todaysoftmag.com


Editorial Staff

Editor-in-chief: Ovidiu Mățan ovidiu.matan@todaysoftmag.com
Editor (startups & interviews): Marius Mornea marius.mornea@todaysoftmag.com
Graphic designer: Dan Hădărău dan.hadarau@todaysoftmag.com
Copyright/Proofreader: Emilia Toma emilia.toma@todaysoftmag.com
Translator: Roxana Elena roxana.elena@todaysoftmag.com
Reviewer: Tavi Bolog tavi.bolog@todaysoftmag.com
Reviewer: Adrian Lupei adrian.lupei@todaysoftmag.com
Accountant: Delia Coman delia.coman@todaysoftmag.com

Made by

Today Software Solutions SRL str. Plopilor, nr. 75/77 Cluj-Napoca, Cluj, Romania contact@todaysoftmag.com www.todaysoftmag.com www.facebook.com/todaysoftmag twitter.com/todaysoftmag ISSN 2285 – 3502 ISSN-L 2284 – 8207

Authors list

Silviu Dumitrescu
silviu.dumitrescu@accesa.eu
Java Line Manager @ Accesa

Raul Rene Lepsa
raul.lepsa@3pillarglobal.com
Java Developer @ 3Pillar Global

Raghudeep Kannavara
raghudeep.kannavara@intel.com
Security Researcher, Software and Services Group @ Intel USA

Marius Ciotlos
marius.ciotlos@betfair.com
Delivery Manager @ Betfair

Alina Chereches
alina.cherecheș@yardi.com
Senior Software Developer & Scrum Master @ Yardi

Andrei Kelemen
andrei.kelemen@clujit.ro
Executive Director @ IT Cluster

Dănuț Chindriș
danut.chindris@elektrobit.com
Java Developer @ Elektrobit Automotive

Mihai Fischer
mihai.fischer@gmail.com
iOS developer @ Dens.io

Mihai Costea
mihai.costea@gmail.com
iOS Developer @ ZenQ

Simona Bonghez, Ph.D.
simona.bonghez@confucius.ro
Speaker, trainer and consultant in project management, Owner of Colors in Projects

Dumitrița Munteanu
dumitrita.munteanu@arobs.com
Software engineer @ Arobs

Ovidiu Măţan
ovidiu.matan@todaysoftmag.com
Editor-in-chief Today Software Magazine

Ovidiu Simionica
ovidiu.simionica@fortech.ro
Team Lead @ Fortech

Silvia Ursu
silvia.ursu@cridl.org
Communications Coordinator @ RICAP

Copyright Today Software Magazine. Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code. www.todaysoftmag.ro www.todaysoftmag.com



analysis

The Cluj IT Ecosystem from the startup perspective

The "ecosystem" term of the title is used as a metaphor for the economic context in Cluj, which establishes close relations between organisms such as the IT community, universities and financing resources. Each of these, the IT community through execution, the universities through research and the local financing resources by sustaining the first two, is involved in creating a network of interdependencies and conditionings. The emergence of startups within this ecosystem is interpreted as an instrument for measuring innovation and entrepreneurial culture. Although the first steps taken in this direction are relatively timid, the importance given to them and the local support are growing. Many of the local events in the IT area have a special section dedicated to startups: Cluj Innovation Days, IT Days, Techsylvania. We can find the same trend even within larger events, such as Cluj Business Days or TEDx. All these prove the importance given by the community to the development of local products, the generation of that IP (Intellectual Property) which provides, at the local level, better stability and greater development potential.

However, for the success of a startup on a global market, it is necessary that the whole ecosystem helps it grow from idea to implementation, visibility and profit. Victor Hwang has very nicely defined the overall picture. The classical outsourcing industry can be viewed as a farm where the planted seeds grow very fast, they all have the same size, and the exceptions are few and generally not a good thing. Alternatively, in a forest, the planted seeds, though they have a small chance to survive, can grow to become massive trees, which can live for a long time. In this environment, it is ok to fail, and this leads to diversity, as opposed to the farm, where all the seeds have to become mature plants. In both examples, what matters most is the soil where these seeds are growing, which to us represents the ecosystem.
The development of some new products means communication, trust and sharing the acquired experiences. We will go on to analyze the main factors that define the Cluj IT ecosystem.

IT Community

It is mostly made up of IT companies which develop products in outsourcing. There are some local exceptions, among which I would like to mention two belonging to the games industry, Exosyphen and Idea Studio, but also those of some big companies such as Arobs or Skobbler, which has now become Telenav. An important number of companies are those which are part of an international company, have ownership of the developed products and host a local R&D division. Their number is still growing and we can enumerate some of them: Betfair, Yardi, Tora, HP, Ullink, SDL or Gameloft. An interesting category is represented by the companies which activated on the local market and have been purchased, being thus integrated in the global market: NTT Data has purchased EBS, Accenture has bought Evoline, the Nok startup was taken over by Intel, and Skobbler was purchased by Telenav.

From the perspective of personal development, outsourcing offers the opportunity to work on important projects, as well as the assimilation of part of the client's culture and of the way they work. From the more general perspective of corporations, outsourcing gives access to a part of the culture of other corporations, communication between different projects, but also meetings with teams from different corners of the world. If ten years ago in Cluj there was only mere execution, now we are dealing with more and more extended ownership. Generally, the architecture of systems is now done locally, and product managers and business analysts are very valuable. This also means a lot of trust and acknowledgement of the quality of the software developed in Cluj, and the tendency is to focus on quality in spite of the costs. As a matter of fact, the higher and higher costs, which are also reflected in the programmers' salaries, represent an important impediment in the development of startups. At the same time, they create that coziness chosen by the majority to the detriment of trying to develop their own products.

The local software market has reached a first level of maturity. There are employees who have fifteen years of experience in the domain, but their share is relatively small. The market is generally young, without grey-haired programmers yet. From the recruiting perspective, there is a continuous battle for attracting talent. Unfortunately, there are situations where people have dedicated 3-4 years to a startup in order to be employed. This can be a problem, but, on the other hand, the assimilation of a company's culture and a better understanding of the product development process can lead in a few years towards the creation of a winning startup.

Universities

In the two universities in Cluj, Babes-Bolyai and the Technical University, approximately 1000 students are prepared to become programmers. Their number is small compared to the necessities of the market, and this can be noticed in the bigger and bigger number of companies which open branches in other cities. Moreover, alternative programs such as 42.fr or Ruby Girls are developed. The importance of universities and their role in the context of




the development of local startups is rather small. We are trying, through the annual IT Days event, www.itdays.ro, and also through Today Software Magazine, to promote the knowledge and projects coming out of universities. They have to turn into an engine of innovation and of the latest technologies, similarly to what Stanford does for Silicon Valley.

Investors

The main investments carried out until now in the startup area have come from the two accelerators from Bulgaria: Eleven and LAUNCHub. Five local startups have received investments of approximately 30,000 euro, but unfortunately they were all closed down. Among the things we can learn from this approach is that the money granted was not enough for reaching the level of a profitable business, ideally at the international level, and that the necessities of the projects were not covered with respect to development, marketing or the legal part.

Angel investors

If we draw a parallel to other development centers, part of the amounts invested in local initiatives comes from those who have achieved an exit. We can thus enumerate Phillip Kandal, one of the four founders of Skobbler, and Daniel Metz, who sold EBS to NTT Data. For now, the two investors have not announced any investment in the IT area, but we are expecting this to happen during the next year.

Accelerators

Gemini Solutions Foundry comes with a different approach. The accelerator offers everything that is needed to bring a startup to the MVP stage. This means technical support, mentorship, legal support, even work space situated centrally in Bucharest, Cluj or Iasi. The purpose of this MVP is to present the startups to some big investment funds from the USA in view of obtaining financing. At the moment, the candidate selection is in full progress.

Other structures

Crowdsourcing

It represents a simple financing alternative. The two local platforms, Crestem Idei (We Grow Ideas) and Multifinantare (Multifinancing), have not succeeded in attracting the public on a large scale, the projects and the amounts thus financed being at a low level. Recently, the Multifinantare project has entered a collaboration with Babes-Bolyai University, by creating a portal to support academic projects.

Social projects

A more popular alternative, especially for the IT companies which activate in the outsourcing area and want an orientation towards products, is represented by social projects: a free project carried out for the community. A suggestive example is the application called "Statui de daci" (Dacian Statues), which promotes the work of historian Leonard Velcescu. Locally, one can take advantage of the opportunity given to Cluj next year, when it will bear the title of European Youth Capital. The accomplishment of such a project will mean a plus for the community, but also a simple way to promote a brand and go through the challenges of developing a product.

Cluj IT Cluster

It is constituted from a group of Romanian companies, having as a main goal the attraction of big projects which could not be carried out by a single company. At the same time, there is efficient communication between the companies involved, generating the conditions for a better coordination of actions. The annual event organized by the cluster, Cluj IT Innovation Days, brought together local companies, government representatives, European Union representatives, as well as research projects. Recently, Today Software Magazine together with Cluj IT Cluster organized in Brasov the event IT in Brasov – Collaboration Opportunities. The public showed real interest and we intend to have more events of the kind.

Startup Weekend

It is an event dedicated 100% to creating startups. If you haven't participated in such an event yet, I suggest that you do. Only new ideas are accepted, and the teams are formed around the most popular pitches. It is a global phenomenon and many startups were born this way. The winners of the last three years are: Mircea Vadan with the Use Together platform, the Cloud Copy team and, this year, an old collaborator of the magazine, Antonia Onaca, with an idea that should change the way employees are evaluated.



Techsylvania Hackathon

Organized within the first Techsylvania edition, it made available to the participants various mobile devices such as Google Glass, Leap Motion, Sphero, Little Printer, Pebble and many others. The results were spectacular, the mini-projects accomplished being really interesting: participants combined Google Glass with Leap Motion to complete a game, or typed a code into an ATM by monitoring eye movement. Probably a combination of Startup Weekend and a hackathon where the latest gadgets are made available would lead to the creation of truly innovative startups.

Today Software Magazine and IT Days

One of our declared goals ever since the first release of the magazine has been to support startups. Today Software Magazine supports the majority of initiatives in this area and we help promote them. Moreover, through the annual event IT Days, which will take place on December 3rd-4th this year, we will present the most important local startups.

Conclusions

The startup phenomenon and the orientation of the companies activating in the outsourcing area towards product creation are continuously growing. We are glad to be able to give an example of real success: the company founded by two citizens of Cluj and an American, LiveRail, has been recently purchased by Facebook. This proves beyond any doubt the value of our local ecosystem and education. The advice we give to those who wish to create a startup is to participate in as many local and international events as possible, since personal relations are very important at the beginning of the journey. It is not likely for your idea to change the world tomorrow if you do not interact with a lot of people. There are many opportunities, which we notice especially in the fact that programmers increasingly want to create their own products and local accelerators are beginning to make their presence felt. The products developed in outsourcing or within big corporations have proved that, from a technical point of view, we are able to bring anything to life. Unfortunately, due to confidentiality, most of them cannot be made public. Universities are becoming more and more open to communicating with specialists outside the academic world and we hope to see in the future more courses on entrepreneurship and, why not, even an accelerator for students, where practice and research would be reunited.

Startups in Cluj

Next, I propose to you a list of local startups which are worth keeping an eye on. I would like to thank Mircea Vadan and Marius Mornea for completing it. In future issues, we will also present an infographic.

Squirrly (http://www.squirrly.co) - a WordPress SEO plugin. The company was founded by Florin Mureșan and it already enjoys a big number of clients. It is also sustained by Phillip Kandal, co-founder of Skobbler.

HackaServer (http://hackaserver.com) and CTF365 (http://ctf365.com) - an old startup in Cluj, run by Marius Corâci and Marius Chiș. It is addressed to those hackers who want a legal challenge and to system administrators who wish to improve security through real testing.

HipMenu (https://www.hipmenu.ro) - an application addressing those who want to order food at their office or at home. Marius Mocian, a local startup supporter, is part of this team.

Evolso (http://www.evolso.com) - a startup started by Alin Stănescu, now part of the acceleration program of StartupYard.

MiraRehab (http://www.mirarehab.com) - one of the most promising local startups. They have built a rehabilitation system for people with locomotor impairments using the Kinect sensor.

Mockups (https://moqups.com) - implements the creation of online mockups, having a large online user base.

Ovidiu Măţan

ovidiu.matan@todaysoftmag.com Editor-in-chief Today Software Magazine





event

RICAP: innovation from lab to market

RICAP was designed out of the desire to provide resources for Romanian innovators to enter international markets with their products, whether in energy, bio-technology, agriculture, ICT or any other technological field.

Thus, RICAP is the first program in Romania supporting innovators and entrepreneurs with innovative technologies to help commercialize them on the global market, supporting them all the way from lab to market. In this effort, the program relies on an international partnership with one of the most important institutions in the U.S. that supports the commercialization of innovation - Larta Institute, on the international connections and know-how of this network and also the network of mentors that we strengthen locally.

"When talking about business and startup issues, I say every time that money is part of the problem, but the most important problem is the lack of know-how (…) The experience we had in RICAP was first of all a coaching experience. That's why I applied, and I sought an environment that encouraged me to study, on a schedule that I wanted, considering the priorities we believe we have, and attacking problems step by step." Daniel Homorodean, CEO Arxia, entrepreneur in the program. The applications for the second edition are open until July 31, 2014, directly on the program website: www.ricap.ro.

What happened in the program?

The first edition took place between January and May 2014. In this period, 15 innovators worked with a dedicated team of US-based commercialization advisors and Romania-based local mentors to further develop their knowledge and commercialization tools, as well as to connect to a global network of potential partners and investors. Depending on the companies' level of development, the program has facilitated more than 30 meaningful strategic introductions to potential partners and funders in the U.S. and Europe, including members of the Fortune 1000 on Larta's Industry Advisory Board. In addition, two companies went to the United States, where 15 business meetings with potential partners and funders had been arranged for them. The experience was different for the 15 participants. "This program can adapt to the needs of every participant, wherever you are in the spectrum, from a pure scientist to a businessman." Alexandru Floares, SAIA and Onco Predict, scientist and entrepreneur in the program.

In June, we launched the second edition of RICAP and we organized events and meetings in several cities. We had the opportunity to meet passionate and visionary innovators who have developed incredible products. Meeting all these people and having vibrant discussions with many of them validated that what we do in RICAP can provide real support both to them and to the users of their products. Along with the innovators selected for the program, the Romanian mentors have played a key role in advising the innovators and defining the commercialization strategy together, and not only that. We had incredible mentors with business vision and expertise: Andrei Pitis, teacher, entrepreneur, business angel and tireless supporter of the ecosystem of innovation and tech start-ups in Romania; Norina Boru, entrepreneur and consultant with extensive experience in the medical field both in Romania and internationally; Alex Mircea Dascalu, an expert with international experience both as an entrepreneur and consultant; Sanda Foamete, education lead at Microsoft; and Florin Talpes, from Bitdefender, also an experienced entrepreneur. You can read more about the mentors and innovators who participated in the first edition on our blog: www.ricap.ro/blog.

Silvia Ursu

silvia.ursu@cridl.org Communications Coordinator @RICAP



startups

ZenQ - “Ze way to say ‘thank you’ and appreciate your awesome friends and colleagues”

We believe that every single human is amazing and should hear it more often. That's why we are building ZenQ: ze way to say thank you and appreciate your awesome friends and colleagues. On your mobile. In zeconds.

Started at Startup Weekend Cluj in March as an iOS-only MVP, the project has grown quickly, with the iOS and Android apps live in the stores since the 7th of May. How does ZenQ work? Imagine browsing through your Facebook friends and endorsing them, just like you do on LinkedIn, but this time for their positive qualities (fun, smart, creative). Someone made your day and you want to go further than this? You can find them in the app and leave a special note showing them how amazing they are. In the end, each of us gets a profile where you are able to discover your key strengths through the eyes of your friends. So essentially, ZenQ is all about spreading positive vibes, having fun, being happy.

There are two scenarios we believe in. First, we believe that ZenQ can strengthen the optimism and positive interactions in various communities, organizations or companies. It happens often in these kinds of environments that we focus so much on accomplishing tasks that relationships fade, and this, in the long term, really damages the group's strength. In the second scenario, we believe users will enjoy the magical, fun and easy ZenQ interaction and will get engaged in this social game of endorsing their friends.

Behind the scenes, ZenQ is powered by a Django backend which uses the Facebook login information gathered by the clients to collect the user's friends from Facebook and provide them back to the clients when requested. The list of traits is also provided by the backend service, making it easy to update based on feedback from the users. This information is then used to create the user profiles, which show a list of traits a user has been endorsed with, ordered by the number of endorsers.

The "face" of ZenQ are the mobile clients: iOS and Android. The interface uses a very simple navigation paradigm in which it takes at most two taps to get to any screen at any time. Most of the focus, though, is on the "ZenQify" screen, which is the first screen the user sees when opening the app; they can easily get back to it from other screens, making it the central point of the app. Users can also endorse specific friends by searching for and accessing their profile, where they can also leave a message linked to the trait they want to endorse them for.

ZenQ is a lot of fun to develop and feedback from a few hundred beta users has been positive so far. Just a few days ago we made available the new iOS and Android versions. We are looking forward to getting more feedback and learning about what would make users happier and more engaged with the app.

Please visit www.zenq.co to get the app on your smartphone and give us your feedback at contact@zenq.co. ZenQ very much!

Mihai Costea

mihai.costea@gmail.com iOS Developer @ ZenQ
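The profile-building step described above, a list of traits ordered by how many friends endorsed each one, can be sketched in a few lines of Python. This is purely an illustrative sketch: the function name and the data shape are our own assumptions, not ZenQ's actual backend code.

```python
# Hypothetical sketch of ZenQ-style profile aggregation (not actual ZenQ code).
# Each endorsement is a pair: (endorser_id, trait). A given friend endorsing
# the same trait twice counts only once, hence the set per trait.
def build_profile(endorsements):
    """Return (trait, endorser_count) pairs, most-endorsed trait first;
    ties are broken alphabetically so the ordering is deterministic."""
    endorsers_by_trait = {}
    for endorser_id, trait in endorsements:
        endorsers_by_trait.setdefault(trait, set()).add(endorser_id)
    return sorted(
        ((trait, len(ids)) for trait, ids in endorsers_by_trait.items()),
        key=lambda pair: (-pair[1], pair[0]),
    )

endorsements = [
    (1, "fun"), (2, "fun"), (3, "creative"),
    (1, "smart"), (2, "smart"), (3, "smart"),
]
print(build_profile(endorsements))  # [('smart', 3), ('fun', 2), ('creative', 1)]
```

In a Django backend such a function would sit between the stored endorsements and the JSON the mobile clients receive; the point is only that the profile is a straightforward count-and-sort over endorsements.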



education

On the Relevance of Student Work Practice

One of the main concerns of Cluj IT Cluster is the human resource of the industry. It is well known that the IT sector in Cluj, still substantially based on outsourcing services, is in constant need of well-trained people, who should be as numerous as possible.

It is also a known fact that most of the IT companies in our city are permanently in search of new talent. According to the existing data, also confirmed by our members, we are talking about hundreds of jobs available, for which it is difficult to find the right candidates. It is also an established fact that the salaries in the IT industry are on average way above the national level and that, when comparing purchasing power, they have become competitive at the global level, too. Parenthetically, I would like to mention an interesting and at the same time paradoxical phenomenon of work force migration, which one CEO of a Cluster company has called "reversed outsourcing". Basically, more and more companies are trying to fill the local shortage of talent on the labour market by importing from other countries and, surprisingly, not from those having a lower living standard than Romania's. But to what extent are all these familiar to those whom we would like to see choosing an IT career? Do they also know these realities? Or is it, rather, that we live with the impression that the realities close to us (as level of knowledge or interest) are as well known to the others as they are to us?

Judging by the concrete situation of an offer misaligned with the demands of the labour market, these realities are apparently not known or, in the best case scenario, very little known. Given this situation, one may ask the natural question: why exactly is this happening, and what are the mechanisms by which we could intervene to make the talent we need available? There are several answers regarding causes as well as solutions, but, as space is limited, I will only refer now to the way student practice is organized in Romania, practice which should be, at least theoretically, a powerful tool of insertion into the labour market.

The curriculum stipulates the compulsory completion of a professional practice stage, which usually means 90 hours, for a number of credits. The 90 hours can be distributed along the two semesters, which actually happens in reality. The fragmentation of an extremely small number of hours is, from my point of view, a mistake. Practically, the student doesn't have the necessary time to understand and draw conclusions regarding what is going on in the company or organization he or she has entered, and whether that institution can or cannot be a real career option. Moreover, the practice stage is defined, by law, as a freestanding subject! All these lead me to a natural conclusion which says a lot about the way this instrument is perceived.

To be more explicit, I will bring examples from other European countries. In the Netherlands, for instance, the master programs stipulate at least one semester, if not an entire practice year, in the domain the person is training for. In Germany, they have implemented special dual programs of a vocational type, which combine the practice stages with the theoretical ones, thus ensuring a consistent group of people who are well trained for what the economy has to offer. The examples can go on, of course, and they all point to a concern for setting up and sustaining an efficient mechanism of insertion into the labour market.

The framework in which we are carrying out our activities in Romania is not, as one can notice, a very favourable one, but this does not discourage us. As part of the efforts to bring to the attention of young people, but not only, the career and life opportunities offered by our industry, Cluj IT Cluster has begun a project by which we are trying to achieve that level of knowledge that is necessary for an informed decision on professional training and development, especially for the young people who are attending university courses or intend to go to university. The program is wider; it contains more phases and levels of action, some of them in a more advanced stage of preparation, others still in the idea stage. This is neither the place nor the time to go into details, but a first step has already been taken. Cluj IT Cluster is a partner in a European non-reimbursable grant with Babes-Bolyai University, through which we will facilitate the access to work practice stages, organized mostly in companies and organizations members of the Cluj IT Cluster, for 400 students from the Faculties of Mathematics and Informatics and of Economic Sciences and Business Management (FSEGA). The project is entitled "Enhancing employability opportunities through successful practice" (PRACT-IT) and it is co-financed from the European Social Fund through the Sectorial Operational Programme for Human Resources Development 2007-2013.

Beyond the statement of purpose and the bald objectives of a project, we wish this initiative to become one through which we are able to make the IT industry better understood, in terms of the opportunities it offers, as well as the rigors required by the employers in this field. This is also the reason why the project design included right from the beginning the students coming from FSEGA, not only those from Mathematics and Informatics. In other words, we wished to go beyond the classical profile of the employee coming from a faculty where the industry is better known and thus to extend the knowledge circle to other training domains, too. The project eventually pursues the enhancement of both the relevance of studies and the competencies acquired during the learning stages, by strengthening these skills within work practice stages, as well as a better insertion of those included in the program into the labour market. The enhancement of employment opportunities will be complementarily and in advance ensured by information and career advice actions for 450 students. The students who will participate in the project activities will come from the following specializations: mathematics, informatics, economic informatics, statistics, marketing, international business, general economy and accountancy, and they will be selected on the basis of a transparent process. The project has an implementation period of 18 months and it started on the 5th of May 2014, but the actual activity with the students will begin with the new university year, that is, in October 2014, first through their selection, then through the career counselling stage and, eventually, the work practice stages. We hope to manage to institute a new model for the way these practice stages are carried out and that the participating students will succeed in taking advantage of a real career opportunity. As the IT industry in Cluj looks today, it is all up to them.

Andrei Kelemen

andrei.kelemen@clujit.ro Executive director @ IT Cluster

www.todaysoftmag.com | no. 25/July, 2014

13


communities

IT Communities

July comes with fewer events. We propose that you attend Business Days and, of course, our launch event for the 25th issue of Today Software Magazine.

Transylvania Java User Group (community dedicated to Java technology)
Website: www.transylvania-jug.org / Since: 15.05.2008 / Members: 582 / Events: 44

The TSM Community (community built around Today Software Magazine)
Website: www.facebook.com/todaysoftmag / Since: 06.02.2012 / Members: 1606 / Events: 20

Cluj Business Analysts (business analysis community)
Website: www.meetup.com/Business-Analysts-Cluj / Since: 10.07.2013 / Members: 77 / Events: 6

Cluj Mobile Developers (mobile technologies community)
Website: www.meetup.com/Cluj-Mobile-Developers / Since: 05.08.2011 / Members: 196 / Events: 13

The Cluj Napoca Agile Software Meetup Group (community dedicated to the Agile methodology)
Website: www.agileworks.ro / Since: 04.10.2010 / Members: 433 / Events: 76

Cluj Semantic WEB Meetup (community dedicated to semantic technology)
Website: www.meetup.com/Cluj-Semantic-WEB / Since: 08.05.2010 / Members: 184 / Events: 27

Romanian Association for Better Software (community dedicated to senior IT people)
Website: www.rabs.ro / Since: 10.02.2011 / Members: 244 / Events: 14

Testing Camp (project which aims to bring together as many testers and QA people as possible)
Website: tabaradetestare.ro / Since: 15.01.2012 / Members: 323 / Events: 31


Calendar
July 9-10 (Cluj) Cluj Business Days - businessdays.ro/Evenimente/Cluj-2014
July 14 (Cluj) Personalized information discovery - meetup.com/Cluj-Semantic-WEB/events/186829692
July 19 (Iași) Iasi Inaugural MUG: mongostat - meetup.com/Iasi-MongoDB-User-Group/events/191672362/
July 19-20 (București) Startcelerate - bucharest.startcelerate.com
July 22 (Cluj) Launch of issue 25/July of Today Software Magazine - www.todaysoftmag.ro
July 23 (Cluj) Requirements Engineering - Factor of successful projects - meetup.com/Business-Analysts-Cluj/events/192771622/
June 28 (Cluj) Mobile Monday Cluj #10 - meetup.com/Cluj-Mobile-Developers/events/177046842/


programming

Romanian IT quo vadis

Inspired by the "What is wrong with the Romanian IT?" article published by Ovidiu Șuța from ISDC in the 23rd edition of TSM, I feel the need to use my imagination to envision a possible direction for the Romanian IT.

Ovidiu Simionica

ovidiu.simionica@fortech.ro Team Lead @ Fortech

The main idea of the article, which I fully share, is that the volume business that defines the outsourcing model has become harmful for the very market it targets. Let us skip what can be debated forever and see how we can reshape the Romanian software industry. Just like in surgery, it all depends on the method used to perform the intervention. Combining technical know-how with work technique must lead to perfection. We therefore have to solve an equation with two parts: personnel training and the quality of the execution. Expertise and capability building have been on an upward trend over the last ten years. The Internet abounds in information and certification companies have reached maturity, which means one can get certified expertise in any area of software development, from framework management and specific programming languages to ISTQB-validated testing abilities. The only hurdle faced by the professional in this process can be the way companies apply it. Learning is an intrinsic part of office work and any business that aims at surviving more than 30 years must account for this fact when shaping its strategy. This means an average daily investment in training per employee, with all its implications (such as changes in

the quoting process, planning and running training programs). Software quality assurance stains our equation due to the deficient practices used by the local companies. Validating the quality of a software product with testing departments is poorly understood and applied. Contrary to the seemingly generalized beliefs of organizations that rushed to advertise their adoption of the ISO standards, software production has very little in common with milk or meat processing. Other means than those referred to in the ISO standards are in fact appropriate and absolutely necessary for this field and they will be addressed in the following sections of the article.

The versioning tool

Versioning the source code has been the status quo for some time in the Romanian IT, mainly due to the dynamics of the development teams and to client requirements. My only advice on this issue is to keep up with the trends, which means you should not use CVS when the hot tool is GIT.

Project documentation & issue tracking

Use integrated “full-feature” tools since they are really inexpensive and contribute



greatly to quality. Access to and control over the project status with such tools will increase your customer's trust in the partnership.

Continuous integration

This is a "must" regardless of the programming language. There is no way to develop a software project without knowing the state of its code at any given time. To make this possible, indispensable tools like Jenkins must be used. At any change made in the code, the developer gets notified whether the automated tests have run successfully and whether the quality metrics lie within the limits agreed upon with the customer. By combining it with setup scripts, one can configure a live system where the client can access at any time the latest version of the product. I have worked in Germany with a prestigious customer that operated in the software manufacturing industry and relied only on compiling in the IDE and manual archiving to produce a version of the product. Gradle, Maven, Ant or even a bash script were only a dream. Needless to say, such an approach cannot be accepted when talking about quality, or even under other circumstances.

Automated testing

There is absolutely no reason to avoid automated testing. Furthermore, this type of testing must come in all its shapes: unit testing, integration testing, regression automation (e.g. Selenium). I have heard countless reasons not to run automated tests, all unfounded. I was recently told it makes no sense to test the JavaScript since it is anyway covered by the manual testing. I insisted on doing automated testing and was presented with the "it is very difficult to test" argument... difficult is never an argument.

Quality metrics

One can choose from a wide variety of metrics, from the simple ones like those used to analyze style conventions to the really complicated metrics, which may analyze even the complexity of a block of code. Putting together a base package of metrics and adhering to it is an intricate part of any project setup. The most relevant are:
• Code coverage: as a general rule, less than 74% is too little, over 85% is too much, and conditional coverage requires special attention.
• Code complexity versus coverage matrix: attack first the code with high complexity and low coverage, by writing unit tests, through refactoring, etc.
• Dependency analysis: do not allow circular references in the code, both at a package and at a file level.

Code review

Adhere to the SOLID principles1 and favor simplicity when doing code inspections.

How can we make sure the Romanian IT will not follow in the footpaths of the Indian one?

We can, for example, set an example in Cluj to inspire the rest of the Romanian IT community. For this to have a real echo we need a formal setting, which would represent a declaration of commitment to maintaining a high standard of performance and professionalism. Such a setting should be supported by the local software companies. Various types of associations are known, of which the Cluj IT Cluster (http://www.clujit.ro) seems the most promising to me nowadays. I quote:

About us
Cluj IT is a cluster association aiming to enhance the innovation capabilities and competitiveness of the Romanian IT sector

It seems audacious, and maybe in this direction I imagine a group of programmers forming a common commission for quality certification of the projects from the companies within the cluster. While this idea is definitely ambitious, implementing it would certainly have a great impact in the long run. I look forward to getting opinions from the readers.

Happy coding.

1 http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29





programming

Java 8, news and improvements

In the 23rd number of Today Software Magazine we started a discussion about what Java SE 8 brings new. Almost unanimously, Java specialists state that the lambda expressions, as a general topic, but also the implications they bring about, represent the most important features of the current version. That is why I thought it useful for the first article to be on this particular topic.

Silviu Dumitrescu

silviu.dumitrescu@accesa.eu Java Line Manager @ Accesa

The discussions in this article are carried out at a higher difficulty level, in order to highlight some aspects of performance, productivity and reducing the size of the written code. For the beginning, I come back to lambda expressions. Through lambda expressions we can create anonymous methods. However, sometimes, the lambda expressions call methods that already have a name. In order to clarify things, I would like to go back to the example from the article mentioned in the beginning. We have a Product class with two features, name and price, with getters and setters. We will also add to our project a POJO class, where there are different comparing methods, with signatures close to the compare() method of the Comparator interface:

public class ProductComparisons {
    public int compareByName(Product a, Product b) {
        return a.getName().compareTo(b.getName());
    }

    public int compareByPrice(Product a, Product b) {
        return a.getPrice() - b.getPrice();
    }
}

In the test class we will create two Product objects, p1 and p2, which we will place in an array:

Product[] basket = { p1, p2 };

We will sort the array by using the expressions:

ProductComparisons myComparison = new ProductComparisons();
Arrays.sort(basket, (a, b) -> myComparison.compareByName(a, b));

This syntax is possible because the second argument of Arrays.sort() is of a type marked @FunctionalInterface, in our case Comparator. Instead of using lambda expressions, Java SE 8 offers the possibility of using method references, written with the :: operator. The syntax is thus much simplified. I go back to our example and apply this operator. The syntax becomes:

Arrays.sort(basket, myComparison::compareByName);

We have the following types of references:
• To a method of an object (the previous example).
• To a static method (the previous example can be easily customised).
• To a method of an arbitrary object of a particular type:

Arrays.sort(stringArray, String::compareToIgnoreCase);

• To a constructor:

Supplier<Product> s = Product::new;

where Supplier is java.util.function.Supplier.
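For readers who want to run the example end to end, here is a self-contained sketch. The body of the Product class is reconstructed from the article's description (a name and a price, with getters), and the sketch also uses the Comparator.comparing factory added in Java 8, which builds a comparator directly from a method reference:

```java
import java.util.Arrays;
import java.util.Comparator;

// Product as described in the article: a name and a price, with getters.
class Product {
    private final String name;
    private final int price;

    Product(String name, int price) {
        this.name = name;
        this.price = price;
    }

    public String getName() { return name; }
    public int getPrice() { return price; }
}

public class MethodRefDemo {
    public static void main(String[] args) {
        Product[] basket = { new Product("tea", 9), new Product("coffee", 5) };

        // A method reference to an instance method of an arbitrary Product,
        // wrapped by the Comparator.comparing factory; thenComparingInt
        // chains a tie-breaker on the price.
        Arrays.sort(basket,
                Comparator.comparing(Product::getName)
                          .thenComparingInt(Product::getPrice));

        System.out.println(basket[0].getName()); // coffee
    }
}
```

The factory form avoids writing a helper class like ProductComparisons at all when the comparison is just a key extraction.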



Another topic I would like to bring to your attention is connected to interfaces. Until this version, interfaces could contain only method signatures and constants. Java 8 proposes an approach to improve interface usability: if we add new signatures to an interface, all the classes implementing it would have to be rewritten. In order to avoid this rewriting, default methods were introduced. Besides signatures and constants, interfaces can thus also contain implementations. I will provide a simple example, to highlight the new approaches:

public interface MyInterface {
    void myClassicMethod();

    default void myDefaultMethod() {
        System.out.println("hello default!");
    }

    static void myStaticMethod() {
        System.out.println("hello static!");
    }
}

and, respectively, the class implementing the interface:

public class MyClass implements MyInterface {
    @Override
    public void myClassicMethod() {
        System.out.println("hello classic!");
    }
}

As a test, we have:

public static void main(String[] args) {
    MyInterface mine = new MyClass();
    mine.myClassicMethod();
    mine.myDefaultMethod();
    MyInterface.myStaticMethod();
}

Thus, we refer to the three kinds of behaviour:
• The classic one, with the (implicitly) public abstract attribute (myClassicMethod).
• The default implementation one.
• The static implementation one.

The inheritance process also supports this new behaviour. Therefore:
• Default methods which are not mentioned in the derived interface inherit the default behaviour from the base interface.
• Default methods re-declared (without a body) in the derived interface become abstract.
• Default methods redefined in the derived interface override the inherited version.

I will not conclude this article without describing some of the APIs used in the previous discussions.
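Default methods also interact with inheritance between interfaces: a default method that is not mentioned in a sub-interface is inherited as-is, one that is re-declared without a body becomes abstract again, and one that is redefined overrides the base version. A minimal sketch of these three rules (the interface names here are mine, not from the article):

```java
interface Base {
    default String greet() { return "base"; }
}

// Not mentioned: greet() is inherited with its default body.
interface Inherits extends Base { }

// Re-declared without a body: greet() becomes abstract again,
// which also makes Abstracted a functional interface.
interface Abstracted extends Base {
    String greet();
}

// Redefined: the derived default overrides the base one.
interface Redefined extends Base {
    default String greet() { return "derived"; }
}

public class DefaultMethodInheritanceDemo {
    public static void main(String[] args) {
        System.out.println(new Inherits() { }.greet());   // base
        Abstracted a = () -> "implemented";               // a body must be supplied
        System.out.println(a.greet());                    // implemented
        System.out.println(new Redefined() { }.greet());  // derived
    }
}
```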


java.util.stream provides classes for operations on streams of elements. Streams differ from collections in the following aspects:
• A stream is not a data structure; it transforms a data structure into an operation pipeline.
• An operation on a stream produces a result, but it does not modify the source.
• Operations on streams are lazily implemented, which means they can expose optimisation opportunities.
• Streams can be conceptually infinite. There are, however, short-circuit operations that can produce an outcome in finite time (limit(), findFirst()).
• Streams are consumable, meaning that the elements are visited only once. For a new traversal, one must create a new stream.

In addition, all the wrapper classes (Boolean, Integer, etc.) have been improved with methods using lambda expressions and references to methods.

The discussions on Java SE 8 will be continued in the future editions of TSM. Until then, have a pleasant reading and a beautiful summer!
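As a small appendix illustrating the stream properties above (the sample data is mine): the sketch shows that a pipeline leaves its source untouched, that an infinite stream becomes finite through the short-circuit limit(), and that a stream can be traversed only once.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamPropertiesDemo {
    public static void main(String[] args) {
        List<String> source = Arrays.asList("ana", "bob", "carol");

        // The pipeline produces a new result; the source is not modified.
        List<String> upper = source.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(source); // [ana, bob, carol]
        System.out.println(upper);  // [ANA, BOB, CAROL]

        // A conceptually infinite stream, cut short by limit().
        List<Integer> squares = Stream.iterate(1, n -> n + 1)
                .map(n -> n * n)
                .limit(3)
                .collect(Collectors.toList());
        System.out.println(squares); // [1, 4, 9]

        // Streams are consumable: a second traversal fails.
        Stream<String> s = source.stream();
        s.count();
        try {
            s.count();
        } catch (IllegalStateException e) {
            System.out.println("stream already consumed");
        }
    }
}
```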



programming

PostgreSQL’s JSON support

There is an undeniable increasing need for data flexibility and scalability, this being the reason why so many have considered NoSQL databases over the last years. However, there are pros and cons, and furthermore non-relational databases were never meant to replace relational ones.

Raul Rene Lepsa
raul.lepsa@3pillarglobal.com Java Developer @ 3Pillar Global

Software developers and architects often have a hard time choosing one over another, especially when the format of the data is unknown or subject to change. A compromise solution is to use both relational and non-relational databases and to set up a communication system between the two. This, however, can prove to be a bigger headache than going with just one database system from the beginning.

Why not 2 in 1?

Companies such as IBM and Oracle started offering methods through which Relational Database Management Systems and non-relational ones can coexist. PostgreSQL offers an alternative, by providing special data types with a loose structure, which mimic NoSQL behavior on a RDBMS. Starting with version 8.3, PostgreSQL introduced the hstore data type, which came in useful for rows with many attributes that are rarely examined and semi-structured data. It allows storing key-value pairs (similar to some NoSQLs) and supports different operations and functions on them. In version 9.2 the JSON-type column was introduced and it got performance improvements and also support for different functions and operators

in the 9.3 beta. JSON (Javascript Object Notation) is a lightweight, human-readable, language-independent data-interchange format and it is stored by Postgres as text.

Why would you consider it?

Almost every developer has had to deal with ever-changing client requirements and felt the need for more flexibility from the data storage system, especially when dealing with multi-tenant applications. On the other hand, clients often feel the need for custom fields and demand flexibility. But what do you do when some clients want one custom field and others want four, for the same functionality? I am sure most of you have seen columns like customField1, customField2, customField3, anotherCustomField, and so on. These can be avoided by using an array, but what if you have to store pairs? Or triplets? What if the custom field has a label and a value? Or even an associated date? Things get more complicated. Another general issue is dealing with people’s names. There is a list of 40 falsehoods about names, and just to get a feel of the point I’m trying to make, here are a few things programmers generally omit thinking about: • people’s names can contain numbers



• people can have an indefinite number of names
• people might not have a first name, a middle name or a last name
• people can have more than one canonical full name
• people's names are not necessarily written in ASCII, and not necessarily written in a single character set
• people might not have names

Of course, you probably won't have to deal with many of these cases, but you can never be sure that your system will never have to store Chinese names. A traditional way of storing names is a combination of first, last and middle names, where the latter is optional. But if a person does not have a first or a last name, this implementation becomes erroneous. A possible way of doing it is using a JSON:

{
  "full_name": "Ronaldo de Assis Moreira",
  "first_name": "Ronaldo",
  "mother_name": "de Assis",
  "last_name": "Moreira",
  "nicknames": ["Ronaldinho", "Gaúcho"]
}

The above implementation can however add the overhead of changing the full_name whenever one of the others is changed, but it’s just an example. Generally speaking, the solution is dependent mainly on the application and the targeted clients, but the above one offers great flexibility compared with classic relational database way of storing names. Rarely will we need to query all of these names, and that’s the

whole purpose. Why create a mother’s name column when most of the entities probably won’t even need one? Why create a nickname column when some entities will need more than one? Same can go for all the other names.

Functions and Operators

Although the full list of functions and operators can be found on the Postgres documentation, it is important to mention that accessing a JSON object field is done using the “->” operator (it can also be retrieved as text using the “->>” operator). For example, supposing the column is called names, accessing a last_name field from it is done by: names->>’last_name’. The above operator can also be used to access an array element at a certain index: (names->’nicknames’)->0 would return the first object from the nicknames array from within the names column. Apart from the operators, starting from version 9.3 there have been a lot of functions added in order to aid the developers using this data type, such as ones for extracting objects from an JSON array, for converting a row to a JSON, for expanding the JSON object into a set of key/value pairs or for safely converting any element to a JSON.

Advantages of the JSON column type

The first advantage of this column type is that JSON is an open standard format, being language-independent. Secondly, it makes it easy to deal with ever-changing requirements because it’s flexible and scalable. It comes in useful when storing multi-level object graphs, as it provides better performance and the code itself is

easier to write and maintain than usual graph implementations. The JSON column does not take much space (it's like storing text) and allows storing up to 1 GB of data into one column. Furthermore, it prevents SQL injection by default because the JSON object is validated before it's persisted. Foreign keys can be avoided by data denormalization, and fields in complex queries can be accessed without having to join other (possibly large) tables, thus offering a NoSQL-like behavior in a relational database management system. It can be a plus in web applications by making it easier to transport and convert data from the client to the web controllers, and then to the persistence layer, by storing JSON objects received from the client. These types of columns can be indexed and, furthermore, indices can be created inside JSON objects (e.g. for arrays inside a JSON object). The current indices supported on both hstore and JSON types are almost as performant as indices on common types, but there is work in progress on the new generation of GIN indices. These have already been implemented for hstore in the 9.4 development version, and performance comparisons with MongoDB have shown that although the sequential scan is almost the same in terms of performance, the index scan is faster than in Mongo. These indices will most probably be applied to the JSON data types as well for version 9.4.

Disadvantages

The most important disadvantage of the JSON data type is that it's not portable, as it is currently only supported by PostgreSQL. Other disadvantages include the fact that



foreign key references are not supported and that queries have an odd syntax, less human readable than ordinary SQL. Furthermore, queries that are simple on common data types can get tricky and difficult using the JSON one, especially when dealing with arrays of objects. To prove this point, consider an example in which phones are stored as an array of JSONs for a users table, as shown in Figure 1. Although this kind of setup looks better than creating columns such as home_number, primary_number, work_number and so on, querying the table in order to retrieve the primary phone numbers for example is overly complicated (the result of the query is shown in Figure 2):

SELECT users.id, phone->>'number' AS number
FROM users
INNER JOIN (
    SELECT id, json_array_elements(phones) AS phone
    FROM users WHERE id = users.id
) phones ON phones.id = users.id
WHERE phone->>'type' = 'primary'

Conclusions

As we often find ourselves searching for flexibility in databases, opting solely for a relational database management system or a NoSQL one (such as a document-based or graph-based one) isn't always a solution. Although steps are being taken in order to ease integrating these two types of databases, at least for now this can prove to be too big of a headache. PostgreSQL offers a "compromise" solution by providing support for hstore columns that mimic key-value stores and for JSON columns that allow storing JSON objects and arrays. Moreover, it provides operators and functions to ease data manipulation, while also allowing indices and inner indices on such objects. Whether the solution is good or not is, of course, dependent on the system and on the requirements it needs to meet. What is important to remember is that the developer still has the full power of the Postgres RDBMS available, whether the JSON data type is used or not.

Figure 1 - Phones stored as an array of JSON objects, each having a type and a number

Figure 2 - Primary numbers as retrieved by the query

One cannot safely assume (generally speaking) that the primary phone number is on the first position of the array. Nevertheless, this is subjective and dependent on the system, so enforcing such constraints could lead to easier queries, as the “->” operator can be directly used to access the n-th element of an array. One more thing to notice is that the above example shows a simple array of phone numbers. Queries become even more complicated when dealing with more complex objects inside the arrays and when the need of joining with elements from within the array appears.



programming


GO and find the tool

My name is Marius Ciotlos and I'm a Delivery Manager at Betfair. Since I started working in the IT industry I've been in a lot of different roles, from software development to network support and computer technical support, and after so many years in the industry I hardly get excited or impressed by a single tool, because so few are actually innovating.

Marius Ciotlos

marius.ciotlos@betfair.com Delivery Manager @ Betfair

Many tools out there are very similar, with just different flavors, or so specific in their task that it is up to you to find how to build your workflow with them. The tool I found is called "GO" and it is being developed by ThoughtWorks. The tool is difficult to find because of the name (simple, elegant, however not even close to being unique). There is also the disadvantage that the "GO" programming language shares the name!

What is this tool?

But enough with the introductions, let's get down to the details behind this tool. GO was developed to be a tool with which you could do "Continuous Delivery". I would like to avoid touching that subject, because there are so many papers out there about this topic and every technical writer puts their own twist on the terminology. If you disconnect yourself from thinking about it like that, the tool becomes much more: a very customizable workflow tool. And above all, it's now open source. I was introduced to GO just a month ago, in a workshop at my company. Some could think that we've been spoon-fed biased information by some contractor that is paid by or working with ThoughtWorks. However, that is far from the truth. GO was at that time just the name of an obscure tool (for me) that maybe we should have a look at. I have been skeptical of "off the shelf" complex tools, as I've had my fair share of disappointments with some that advertise themselves as doing a lot and end up half of the time getting in your way.


Structure of GO: High level overview, looking at the tool from space

GO is a Server -> Agent type of architecture, where you have your main server with the UI, and many agents that connect to this server and receive commands. Agents can run on Windows, Linux and Mac OS X, and it's up to you how many you want (you can even run more agents on one machine). On the server you have:
• Tasks, that run in sequence
• Jobs, that run in parallel and contain Tasks
• Stages, that run in sequence and contain Jobs
• Pipelines, that run in parallel or in sequence depending on how they're chained, and that contain Stages

Your agents will have tags to specify what they can do (for example run bash, run Python, run Git, run SSH) and your Jobs have tags too, to specify what resources you need for that Job to run. The server just matches the right agent to the right job!

What can you do with GO?

We use it for server deployments, running tests and running infrastructure automation. However, because of the way it's built, you could use it for a lot of other purposes. The way it works:
• You find some commands (tools or scripts or just console commands) that you run in a repetitive pattern (we'll call them tasks).
• You group those commands that can be run as a unit, let's say create a user in AD, create his email, assign the user to a group (we'll call this a job).
• You group together jobs that can run in parallel, like, after having the user in AD, creating different permissions based on the AD group in a lot of different tools (we'll call this a stage).
• You group together stages in a sequence, so you can do complex chained groups of commands. You could have a first stage doing all of the above, and a second stage preparing the installation of a laptop and a desktop in parallel (we'll call this a pipeline).
• You can even have pipelines chained together, or run in parallel.

Based on artifacts you can have some custom tabs that load HTML you generated with a command, to view directly in the tool.

The main features that allow you to achieve this high level of flexibility:

You can find more resources about the tool here:
Public forum: https://groups.google.com/forum/#!forum/go-cd
Documentation: https://go.app.betfair/go/help/index.html
Older community page: http://support.thoughtworks.com/categories/20002778-Go-Community-Support
Introduction webinar: http://www.go.cd/2014/03/10/go-webinar-recording.html

Custom Commands:

Custom commands are a list of commands that you can use to do stuff in a task. The nice thing in GO is that you can run literally anything using a terminal (Linux, Windows, Mac OSX). If the basic commands are not enough for you, it’s always possible to create more complex commands by yourself, using scripts (think of PowerShell, Python, etc.). Custom commands allow you to also track the status of the command that just ran. We have used Python scripts to have even more flexibility to process output and send back the right “status code” so you know when a task (which is running the command) has failed or not.

Configuration XML

The entire configuration for the server sits in one versioned XML file, and if the UI is not enough, you can always manually edit. This feature is only available to the Super Admin as it is very easily possible to screw things up from there. The Configuration XML sits in a text area that has a lot of validation, but one thing you will probably end up doing as you build your workflow is rename some Pipelines, and this can cause you problems.
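For orientation, a pipeline definition in that XML looks roughly like the sketch below. This is written from memory of GoCD's cruise-config schema, and the pipeline name, repository URL and command are invented, so treat it as an illustration and check the official documentation for the exact elements:

```xml
<pipeline name="deploy-app">
  <materials>
    <git url="https://example.com/repo.git" />
  </materials>
  <stage name="build">
    <jobs>
      <job name="compile">
        <!-- resource tags are matched against the tags declared on agents -->
        <resources>
          <resource>linux</resource>
        </resources>
        <tasks>
          <exec command="./build.sh" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```

Renaming the `name` attributes by hand is exactly the kind of edit the paragraph above warns about, since other pipelines may reference them.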

Artifacts – Build and Test

Artifacts are the second great concept in GO, where a job (which has multiple tasks with custom commands) can generate some output and you could “tell” GO that it is an artifact. Even if you only have two types of artifacts allowed by GO, the Build Artifact is generic enough to be anything. With artifacts you can bring in the tool more information from the output, but you can also provide these to a different Stage or Pipeline for usage. Imagine a script that generates a temporary binary that you need further on down the pipeline. You could use artifacts so you no longer need an external repo for something that is so volatile. Also, the Test Artifacts are the way GO can interpret test results for a Stage.

www.todaysoftmag.com | no. 25/July, 2014

23


management

Scrum with Extreme Programming

Managing a software development team is a job that hasn't got easier through the years. Since the Agile Manifesto, in 2001, many software development companies and teams have tested many of the Agile methods and techniques successfully. Also known as Extreme Project Management (XPM), this approach to project management mainly addresses the aspect of changing requirements. It will increase the level of adaptability in a development team, but it will decrease that of predictability.

Alina Chereches

alina.cherecheș@yardi.com Senior Software Developer & Scrum Master @ Yardi

The lack of team predictability is, indeed, the downside of Agile methodology. This article talks about how Agile theory works in XPM and provides some, hopefully, helpful examples of how it can translate into successful projects for the team and profit for the organization.

The Agile Team

First of all, XPM is not intended for just any team. As specified in the Manifesto, we're talking about self-organized teams, formed of small numbers of senior developers, working on projects of low criticality, where requirements often change. That's why we need a team culture that responds to change: a communicating, cross-functional team.

The roles in Agile teams are also important. An Agile team does not have the classical manager, and its members have ownership of their work. The Scrum Master is the team's manager, and his role is that of a coach. He has organizational skills, obtains resources for the team and leads the team towards achieving its goals. Other roles in the Agile team would be: the Domain Expert, the Technical Expert, the Independent Tester, the Product Owner, the Stakeholder. As we are about to see later, each of these members will focus on different aspects of the project they are all involved in, and this is the real challenge of the Agile Team.

I've divided the aspects that Agile methods are intended to enforce in a team into three categories:
• Communication
• Efficiency
• Quality

As mentioned above, XPM works with constant and productive team communication. Following Agile procedures, such a working environment can be built to assure evolution and process optimization. And these are the Agile tools: Iteration Planning, Poker Planning, Velocity Tracking, Sprint Retrospective. Team efficiency is ensured by the daily Scrum meetings (stand-ups) and the very short feedback loop that makes a sprint (working cycle) very adaptive. The Agile team will also be focused on quality, by practicing continuous integration, unit testing, pair programming, code review and code refactoring, domain-driven design and test-driven development.

The Agile dimensions

Agile methodology teaches us many things and approaches development from different points of view. I think one of the most important dimensions would be the Agile Undefined Process: the principle according to which an Agile process or project is not at any point fully defined. It also refers to the concept of Agile Modeling: documentation, architecture and requirements that can change at any time and that must always be clear and transparent. It is also the principle that emphasizes the difference between a Development Release and a Production Release, or the difference between team velocity and meeting product release commitments. As I mentioned above, Agile is not about being predictable. Its focus is not procedures or artifacts, but methodologies for people. In Agile terminology these are called Crystal Methods: frequent release of usable code, reflective improvement, easy access to expert users, automated tests.

Another important Agile dimension is Feature Driven Development. This presents best practices from the client-value functionality perspective: domain-driven development, individual code ownership,



regular releases and visibility of progress and results. And with these best practices we're getting closer and closer to the other side of Agile: expenditure of resources and on-time delivery.

In Agile, the management of time, quality and cost is called the Dynamic System Development Method. It talks about focusing on the business need, delivering without compromising quality, always demonstrating control by building incrementally and communicating continuously and clearly. It introduces the term prioritization (the musts, the shoulds, the coulds and the won't-haves). Even though I'll be presenting the use of Scrum in XPM, there are guidelines from the Kanban methodology that complete very well the point I'm trying to make in terms of delivering on time and controlling resources. These guidelines can be summarized by the phrase "Stop starting and start finishing", meaning that the team should agree to pursue incremental change and respect the current process, roles, responsibilities and titles.

So, I've mentioned that Agile doesn't focus on procedures, but on people and methods for people. Well, there is one last Agile dimension that does concentrate on what's being developed instead of how. This is known as Lean Software Development: policies and procedures written for the purpose of controlling the expenditure of resources. Some of these policies are: eliminate waste, decide as late as possible, deliver as fast as possible, see the whole picture, build integrity in (the perceived integrity of a system, by the client, is based on how it's being advertised, delivered, deployed, accessed, on how intuitive it is, on its price and how well it solves problems).

The Challenge

We'll go back, now, to that point where I mentioned that different members of the Agile team have different interests when building projects. Of course, everyone is doing their best job and understands and respects the Agile Project Scope (what software to build and deliver). But, while the team members are interested in the Extreme Programming (XP) engineering methods and practices and in writing quality code, the Scrum Master is interested in keeping up with the unpredictability of system requirements, while at the same time being able to measure the velocity of his team. The Product Owner, on the other hand, is not interested in the velocity of the team, nor in the quality of the code. He would like the team to be able to make accurate production release estimations. In other words, the Product Owner, as well as the Stakeholders, would like the team to be ... predictable. In order to be able to predict the Production Release, a team must first be able to predict Development Releases. For this we have the Scrum Project Management tools: poker planning, velocity tracking charts and sprint retrospectives. Now, let's go through some concrete examples of how Scrum and XP could work together and help us reach the goal of predictability.

Scrum with XP - how to make it work?

Usually, you'll find more articles talking about the differences between Scrum and XP. Even though they both focus on producing working software deployments in a short time and emphasize the importance of frequent communication between the teams, these two seem to be defined as opposite approaches to developing systems. Knowing some of the most important differences between Scrum and XP, here are some concrete tools that merge the advantages of the two methodologies, so they can help us with the issue of unpredictability.

Documentation and testing

A Scrum team comprises all the people necessary in delivering working software. This means developers, testers and business




analysts working toward a common goal. Even though we've embraced the Agile principles for development, release management and project planning are still done according to the Waterfall model. The Agile realities of enterprises have diverged from the original ideas described in the Manifesto and translated into a hybrid approach to application lifecycle management called Water-Scrum-Fall:
Water - requirements and specification (all the documentation needed, kept up to date)
Scrum - design and implementation (engineering practices)
Fall - verification, maintenance (automated testing, release and deployment)

XPM is about creating qualitative, working deliverables that provide the highest possible business value while reducing the risk of failure. With Water-Scrum-Fall you slowly move from team-based Agile to enterprise Agile, driving the benefits of Agile into the organization. By embracing the benefits of both the Agile and Waterfall development methodologies, you take control of how the team connects with the other parts of the organization, incrementally reducing waste and increasing predictability in terms of estimations and delivery.

Velocity Tracking

In Scrum, velocity is how much product backlog a team can handle in one sprint. This can be estimated by analyzing previous sprints, assuming that team composition and sprint duration are kept constant. Velocity reports are used in Sprint Planning meetings to define the following sprint. Once established, velocity will be used for release planning. Velocity measurement charts:
• Burndown chart
• Velocity chart

With XP, sprints are flexible and new stories may be added during the sprint. This makes velocity tracking more difficult and Burndown charts almost impossible to keep. What should happen is that, when a new task is pushed during the sprint, another one is removed. This way the amount of story points assigned per sprint remains as planned. If this does not happen and, while new tasks keep being added, none or fewer are being removed, the team may not be able to complete the sprint as planned, and thus be unable to meet its delivery commitments. The XPM Velocity Tracking equation:

Planned (sp) + Added (sp) - Removed (sp) = Assigned (sp) = Burned (sp)

Pairing and reviewing

One important principle of XP is collective code ownership: ensuring that you have more than one pair of eyes looking over one piece of code. The purpose of this approach is not only to deliver quality code, with fewer bugs, but also to share the knowledge amongst all the team members and to give developers the chance to learn good ideas from their colleagues. In order to make sure that the cost of reviewing code does not exceed the benefits, here's an idea of what would work and when. First, you may be familiar with the phases of an Agile team: Forming, Storming, Norming and Performing. Depending on where the team is at the moment, some code review approaches apply more than others. During the Forming period it is recommended that team members pair and work together on projects. This way, juniors can work with seniors, members with more experience on the project can share the knowledge, and the team gets more productive than when individuals work alone. When the team is in the Norming phase, code review sessions with the whole team are suited. Now is when the team is setting its coding guidelines and standards, and some major collective decisions have to be made. Finally, if the team is mature and in Performing mode, working individually will be more efficient than having two people working on the same piece of code. Peer review is the code evaluation that should still be practiced in this case.

Start doing it!

The last, but also the most important rule of XPM is making sure we put into practice as much as possible, as soon as possible. Every team meeting should finalize with a set of conclusions and action items, and no sprint retrospective should set new goals for the team as long as previous ones haven't been reached. We want to keep the team aware, responsible and focused.
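As a closing sketch, the velocity-tracking arithmetic from the Velocity Tracking section can be checked in a few lines of code. The numbers are invented for illustration; only the equation itself comes from the article.

```python
def assigned_story_points(planned, added, removed):
    """XPM velocity tracking: Planned + Added - Removed = Assigned.
    For the sprint to stay on plan, Assigned should equal Planned,
    i.e. every story point pushed in is compensated by one pulled out."""
    return planned + added - removed

def sprint_on_plan(planned, added, removed):
    return assigned_story_points(planned, added, removed) == planned

# A sprint planned at 30 sp where 5 sp were pushed in and 5 sp pulled out
# stays on plan; adding 5 sp without removing anything over-commits it.
print(sprint_on_plan(30, 5, 5))
print(sprint_on_plan(30, 5, 0))
```

A check like this could sit behind a burndown chart: whenever the invariant fails, the delivery commitment is at risk.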


architecture


Agile Design Principles

Agile programming is based on developing the software product in small incremental blocks, in which the client's requests and the solutions offered by the programmer evolve simultaneously. Agile programming relies on a close relation between the final quality of the product and the frequent deliveries of incrementally developed functionalities: the more deliveries are carried out, the higher the quality of the final product.

In an agile implementation process, the modification requests are regarded as positive, no matter the development stage of the project. This is due to the fact that the modification requests prove that the team has understood what is necessary for the software product to comply with the necessities of the market. For this reason, it is necessary for an agile team to keep the code structure as flexible as possible, so that the new requirements of the clients have the smallest possible impact on the existing architecture. However, this doesn't mean that the team will make an extra effort to take into consideration the future requirements and necessities of the clients, nor that it will spend more time to implement an infrastructure which might support possible requirements necessary in the future. Instead, it means that the team will focus on developing the current product as well as possible. With this purpose in view, we shall investigate some of the software design principles that an agile programmer needs to apply from one iteration to another, in order to keep the project's code and design as clean and flexible as possible. These principles were suggested by Robert Martin in his book called "Agile Software Development: Principles, Patterns and Practices" (1).

Single Responsibility Principle: SRP

A class should have only one reason to change.

In the SRP context, responsibility can be defined as "a reason to change". When the requirements of the project change, the modifications will be visible through the alteration of the responsibilities of the classes. If a class has several responsibilities, then it will have more reasons to change. With coupled responsibilities, modifications to one responsibility will imply modifications to the other responsibilities of the class. This correlation leads to a fragile design. Fragility means that a modification of the system leads to a break in design, in places that have no conceptual connection to the part which has been modified.

Example: Suppose we have a class which encapsulates the concept of phone and the associated functionalities.

class Phone {
public:
    void Dial(const std::string& phoneNumber);
    void Hangup();
    void Send(const std::string& message);
    void Receive(const std::string& message);
};

This class might be considered reasonable. All four methods represent functionalities related to the phone concept. However, this class has two responsibilities. The methods Dial and Hangup are responsible for performing the connection, while the methods Send and Receive are responsible for data transmission. If the signature of the methods responsible for performing the connection were subjected to changes, this design would be rigid, since all the classes which call the Dial and Hangup methods would have to be recompiled. In order to avoid this situation, a re-design is necessary, to divide the two responsibilities.

Figure 1

In this example, the two responsibilities are separated, so that the class that uses them, Phone, does not have to couple the two of them. The changes of the connection will not affect the methods responsible for data transmission. On the other hand, if the two responsibilities do not show reasons for modification in time, their separation is not necessary either. In other words, the responsibilities of a class should be separated only if there are real chances that the responsibilities would produce modifications, mutually influencing each other.
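The separation described above can be sketched as follows. The original design is shown only as a class diagram (Figure 1), so the class and method names here are illustrative, and Python is used for brevity even though the article's own snippets are C++.

```python
class Connection:
    """Responsible only for establishing and closing the connection."""
    def dial(self, phone_number: str) -> None:
        print(f"Dialing {phone_number}...")

    def hangup(self) -> None:
        print("Hanging up.")

class DataChannel:
    """Responsible only for data transmission."""
    def send(self, message: str) -> None:
        print(f"Sending: {message}")

    def receive(self) -> str:
        return "incoming message"

class Phone:
    """Composes the two responsibilities without coupling them:
    a change in how connections are made never touches DataChannel."""
    def __init__(self):
        self.connection = Connection()
        self.channel = DataChannel()
```

A caller that only transmits data now depends solely on DataChannel, so changes to the connection signatures no longer force it to recompile (or, in Python terms, to change).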

Conclusion

The Single-Responsibility Principle is one of the simplest of the principles, but one of the most difficult to get right. Finding and separating those responsibilities is much of what software design is really about. In the rest of the principles of agile software design that we will analyse further on, we will come back to this issue in one way or another.

Open Closed Principle: OCP

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.

When a single modification on a software module results in the necessity to modify a series of other modules, the design suffers from rigidity. The OCP principle advocates refactoring the design, so that further modifications of the same type no longer produce modifications on the existing code, which already functions; instead they only require adding new modules. A software module observing the Open-Closed principle has two main characteristics:
• "Open for extensions." This means that the behaviour of the code can be extended. When the requirements of the project are modified, the code can be extended to implement the new requirements, meaning that one can modify the behaviour of the already existing module.
• "Closed for modifications." The implementation of the new requirements does not need modifications on the already existing code.

Abstraction is the method which allows the modification of the behaviour of a software module without modifying its already existing code. In C++, Java or any other object oriented language, it is possible to create an abstraction which offers a fixed interface and an unlimited number of implementations, namely different behaviours (2).

Fig. 2 presents a block of classes that do not conform to the open-closed principle. Both the Client class and the Server class are concrete. The Client class uses the Server class. If we want a Client object to use a different server object, the Client class must be changed to name the new server class.

Figure 2. Example which does not comply with the OCP principle

In Fig. 3, the same design as the one in Fig. 2 is presented, but this time the open-closed principle is observed. In this case, the abstract class AbstractServer was introduced, and the Client class uses this abstraction. However, the Client class will actually use the Server class, which implements the ClientInterface class. If, in the future, one wishes to use another type of server, all that needs to be done is to implement a new class derived from the ClientInterface class; this time the client doesn't need to be modified.

Figure 3. Example observing the OCP principle

A particular aspect in this example is the way we named the abstract class ClientInterface and not ServerInterface, for example. The reason for this choice is the fact that abstract classes are more closely associated to their clients than to the classes that implement them.

The Open-Closed principle is also used in the Strategy and Plugin design patterns (3). For instance, Fig. 4 presents the corresponding design, which observes the open-closed principle.

Figure 4

The Sort_Object class performs a function of sorting objects, a function which can be described in the abstract interface Sort_Object_Interface. The classes derived from the abstract class Sort_Object_Interface are forced to implement the method Sort_Function(), but, at the same time, they have the freedom to offer any implementation for this interface. Thus, the behaviour specified in the interface of the method void Sort_Function() can be extended and modified by creating new subtypes of the abstract class Sort_Object_Interface. In the definition of the class Sort_Object we will have the following methods:

void Sort_Object::Sort_Function()
{
    m_sort_algorithm->sortFunction();
}

void Sort_Object::Set_Sort_Algorithm(const Sort_Object_Interface* sort_algorithm)
{
    std::cout << "Setting a new sorting algorithm..." << std::endl;
    m_sort_algorithm = sort_algorithm;
}

Conclusions

The main mechanisms behind this principle are abstraction and polymorphism. Whenever the code has to be modified in order to implement some new functionality, one must also take into consideration the creation of an abstraction which can provide an interface for the desired behaviour and offer, at the same time, the possibility to add new behaviours for the same interface in the future. Of course, the creation of an abstraction is not always necessary; this method is generally useful where there are frequent changes to be made.

Conformance to the open-closed principle is costly. It requires time to develop and effort to create the necessary abstractions. These abstractions increase the complexity of the software design. In exchange, the Open/Closed Principle is, in many ways, at the heart of object-oriented programming. Conformance to this principle is what yields the greatest benefits claimed for object-oriented technology: code flexibility, reusability and maintainability.

The Liskov Substitution Principle (LSP)

Subtypes must be substitutable for their base types.

In languages such as C++ or Java, the main mechanism through which abstraction and polymorphism are achieved is inheritance. In order to create a correct inheritance hierarchy, we must make sure that the derived classes extend, without replacing, the functionality of the base classes. In other words, the functions using pointers or references to the base classes should be able to use instances of the derived classes without being aware of this. On the contrary, the new classes may produce undesired outcomes when they are used in the entities of the already existing program. The importance of the LSP principle becomes obvious the moment it is violated.

Example: Suppose we have a Shape class, whose objects are already used somewhere in the application and which has a SetSize method, containing the mSize property which can be used as a side or diameter, depending on the represented figure.

Figure 5

Later on, we will extend the application by adding the Square and Circle classes.



Taking into consideration the fact that inheritance models an IS-A relationship, the new Square and Circle classes can be derived from the Shape class. Let's suppose hereafter that the Shape objects are returned by a factory method, based on some conditions established at run time, so that we do not know exactly the type of the returned object; but we do know it is a Shape. We get the Shape object, we set its size property to 10 units and we compute its surface. For a Square object, the area will be 100.

void f(Shape& shape, float& area)
{
    shape.SetSize(10);
    shape.GetArea(area);
    assert(area == 100); // Oops!
    // for a Circle, area = 314.15927!
}

In this example, when the f function gets a Circle instance as a parameter, it will behave wrongly. Since, in function f, the Circle type objects cannot substitute the Shape type objects, the LSP principle is violated. The f function is fragile in relation to the Square/Circle hierarchy.

Design by Contract

Many developers may feel uncomfortable with the notion of behavior that is "reasonably assumed". How could you know what our users/clients will really expect from the classes we are implementing? To our help comes the design by contract technique (DBC). The contract of a method informs the author of a class about the behaviors that he can safely rely on. The contract is specified by declaring preconditions and postconditions for each method. The preconditions must be true in order for the method to execute. On completion, after executing the method, it guarantees that the postconditions are true. Certain languages, such as Eiffel, have direct support for preconditions and postconditions: they only have to be declared, and during runtime they are automatically verified. In C++ or Java, this functionality is missing. Contracts can instead be specified by writing unit tests. By thoroughly testing the behavior of a class, the unit tests make the behavior of the class clear. Authors of client code will want to review the unit tests in order to know what to reasonably assume about the classes they are using.
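The idea of expressing a contract through unit tests can be sketched as follows; the Stack class is a hypothetical example (not from the article), written in Python for brevity.

```python
import unittest

class Stack:
    """A tiny class whose pop() method carries an implicit contract."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        # Precondition: the stack must not be empty.
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackContractTest(unittest.TestCase):
    """These tests document the contract of pop():
    precondition  - the stack is non-empty;
    postcondition - the most recently pushed item is returned and removed."""

    def test_pop_requires_non_empty_stack(self):
        with self.assertRaises(IndexError):
            Stack().pop()

    def test_pop_returns_and_removes_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)
        self.assertEqual(s.pop(), 1)
```

A client reading StackContractTest learns exactly what it may assume: calling pop() on an empty stack raises, and otherwise the last pushed item comes back, which is the "reasonably assumed" behavior made explicit.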

Conclusions

The LSP principle is a mere extension of the Open-Closed principle, and it means that, when we add a new derived class to an inheritance hierarchy, we must make sure that the newly added class extends the behaviour of the base class, without modifying it.

Dependency Inversion Principle (DIP)

A. High-level modules should not depend on low-level modules. Both should depend on abstract modules.
B. Abstractions should not depend upon details. Details should depend upon abstractions.

This principle enunciates the fact that the high-level modules must be independent of those on lower levels. This decoupling is done by introducing an abstraction level between the classes forming a high hierarchy level and those forming lower hierarchy levels. In addition, the principle states that the abstraction should not depend upon details, but the details should depend upon the abstraction. This principle is very important for the reusing of software components. Moreover, the correct implementation of this principle makes it much easier to maintain the code.

Fig. 6 presents a diagram of classes organised on three levels. Thus, the PolicyLayer class represents the high-level layer and it accesses the functionality in the MechanismLayer class, situated on a lower level. In its turn, the MechanismLayer class accesses the functionality in the UtilityLayer class, which is also on a low-level layer. In conclusion, it is obvious that, in this class diagram, the high levels depend upon the low levels. This means that, if there is a modification on one of the low levels, chances are rather high that the modification propagates upwards, towards the high-level layers, which means that the more abstract higher levels depend on the more concrete lower levels. So, the Dependency Inversion principle is violated.

Figure 6

Figure 7 presents the same class diagram as in Fig. 6, but this time the dependency inversion principle is observed. Thus, to each level accessing the functionality of a lower level, we added an interface which will be implemented by the lower level. This way, the interface through which the two levels communicate is defined in the higher hierarchical level, so that the dependency was reversed, namely the low level depends on the high level. Modifications carried out on the low levels no longer affect the high levels, but the other way around. In conclusion, the class diagram in Fig. 7 complies with the dependency inversion principle.

Figure 7

Conclusions

Traditional procedural programming creates dependency policies where high-level modules depend on the details of low-level modules. This programming method is inefficient, since modifications of the details lead to modifications in the high-level modules also. Object oriented programming reverses this dependency mechanism, so that both the details and the high levels depend upon abstractions, and the services often belong to the clients. No matter the programming language used, if the dependencies are inverted, then the code design is object oriented. If the dependencies are not inverted, then the design is procedural. The dependency inversion principle represents the fundamental low-level mechanism at the origin of many benefits offered by object oriented programming. Complying with this principle is fundamental for the creation of reusable modules. It is also essential for writing code that can stand modifications. As long as the abstractions and the details are mutually isolated, the code is much easier to maintain.
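The inverted dependency between PolicyLayer and MechanismLayer can be sketched as follows. The layer names come from the figures; the method names are assumptions, and Python is used for brevity even though the article's own snippets are C++.

```python
from abc import ABC, abstractmethod

class MechanismInterface(ABC):
    """Owned by the high level: PolicyLayer defines what it needs."""
    @abstractmethod
    def execute(self) -> str: ...

class PolicyLayer:
    """High-level module: depends only on the abstraction."""
    def __init__(self, mechanism: MechanismInterface):
        self._mechanism = mechanism

    def apply_policy(self) -> str:
        return f"policy over {self._mechanism.execute()}"

class MechanismLayer(MechanismInterface):
    """Low-level detail: depends on (implements) the abstraction."""
    def execute(self) -> str:
        return "mechanism"

print(PolicyLayer(MechanismLayer()).apply_policy())
```

Swapping in another MechanismInterface implementation never touches PolicyLayer, which is exactly the reversal of the dependency direction the principle asks for.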
The dependency that the modification propagates upwards, inversion principle represents the fundatowards the high level layers, which means mental low level mechanism at the origin that the more abstract higher levels depend of many benefits offered by the object orion the more concrete lower levels. So, the ented programming. Complying with this Dependency Inversion principle is violated. principle is fundamental for the creation of reusable modules. It is also essential for writing code that can stand modifications. As long as the abstractions and the details are mutually isolated, the code is much easier to maintain.

The Interface Segregation Principle (ISP)

Clients should not depend on interfaces they do not use.




This principle stresses the fact that, when an interface is being defined, one must be careful to put in the interface only those methods which are specific to the client. If one adds to an interface methods which do not belong there, then the classes implementing the interface will have to implement those methods, too. For instance, if we consider the interface Employee, which has the method Eat, then all the classes implementing this interface will also have to implement the Eat method. However, what happens if the Employee is a robot? Interfaces containing unspecific methods are called "polluted" or "fat" interfaces.

Figure 8 presents a class diagram containing: the TimerClient interface, the Door interface and the TimedDoor class. The TimerClient interface should be implemented by any class that needs to intercept events generated by a Timer. The Door interface should be implemented by any class that implements a door. Taking into consideration that we needed to model a door that closes automatically after a period of time, Fig. 8 presents a solution in which we have introduced the TimedDoor class derived from the Door interface; in order to also dispose of the functionality from TimerClient, the Door interface was modified so as to inherit the TimerClient interface. However, this solution pollutes the Door interface, since all the classes that inherit this interface will have to implement the TimerClient functionality (4), too.

class Timer {
public:
    void Register(int timeout, TimerClient* client);
};

class TimerClient {
public:
    virtual void TimeOut();
};

class Door {
public:
    virtual void Lock() = 0;
    virtual void Unlock() = 0;
    virtual bool IsDoorOpen() = 0;
};

The separation of interfaces can be done through the mechanism of multiple inheritance. In Fig. 9, we can see how multiple inheritance can be used to comply with the Interface Segregation principle in design. In this model, the TimedDoor interface inherits from both the Door and TimerClient interfaces.
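The Fig. 9 design can be sketched with abstract base classes. The interface names come from the article; the method bodies are illustrative, and Python stands in for the C++ of the original snippets.

```python
from abc import ABC, abstractmethod

class TimerClient(ABC):
    @abstractmethod
    def time_out(self): ...

class Door(ABC):
    """Door knows nothing about timers: the interface stays unpolluted."""
    @abstractmethod
    def lock(self): ...
    @abstractmethod
    def unlock(self): ...
    @abstractmethod
    def is_door_open(self) -> bool: ...

class TimedDoor(Door, TimerClient):
    """Inherits both client-specific interfaces, so plain doors
    are never forced to implement TimerClient's methods."""
    def __init__(self):
        self._open = False

    def lock(self):
        self._open = False

    def unlock(self):
        self._open = True

    def is_door_open(self) -> bool:
        return self._open

    def time_out(self):
        # Close the door automatically when the timer fires.
        self.lock()
```

A Timer only ever sees the TimerClient side, and ordinary door code only sees the Door side; neither client depends on methods it does not call.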


For further reading on the enunciated principles, I highly recommend Robert C. Martin’s book, “Agile Software Development – Principles, Patterns and Practices”.

Bibliography

Figure 8

[1] Robert Martin. Agile Software Development: Principles, Patterns, and Practices. Prentice Hall, 2002.
[2] Gamma, et al. Design Patterns. Reading, MA: Addison-Wesley, 1998.
[3] Liskov, Barbara. Data Abstraction and Hierarchy. SIGPLAN Notices, 23.5 (May 1988).

Figure 9

Conclusions

Polluted or fat classes cause harmful couplings between their clients. When one client forces a change on the fat class, all the other clients of the polluted class are affected. Thus, clients should depend only on the methods that they actually call. This can be achieved by breaking the interface of the fat class into many client-specific interfaces. Each client-specific interface declares only those functions that its particular client or client group invokes. The fat class can then inherit all the client-specific interfaces and implement them. This breaks the dependence of the clients on methods that they don't invoke and allows the clients to be independent of one another.

[4] Meyer, Bertrand. Object-Oriented Software Construction, 2nd ed. Upper Saddle River, NJ: Prentice Hall, 1997.

Agile Design Conclusions

By repeatedly applying, from one iteration to another, the above mentioned principles, we avoid the three characteristics that define a poor quality software architecture:
• rigidity - the system is difficult to modify, because each modification affects too many parts of the system;
• fragility - if something is modified, all sorts of unexpected errors occur;
• immobility - it is difficult to reuse parts of the application, since they cannot be separated from the application they were initially developed for.

Agile design represents a process based on the continuous application of the agile principles and the design patterns, so that the design of the application constantly remains simple, clean and as expressive as possible.

no. 25/July, 2014 | www.todaysoftmag.com

Dumitrița Munteanu

dumitrita.munteanu@arobs.com Software engineer @ Arobs


programming

TODAY SOFTWARE MAGAZINE

iOS 7 blur

An intro to flat design

In the last couple of years, the whole mobile world, and we're talking big names: Android, iOS, Windows, has adopted the concept of flat design. The purpose is the same: to offer a better experience to the user through clean, understandable user interfaces. With iOS 7, Apple introduced flat design to its operating system, along with some new user experience concepts, like the idea of "always know where you are", for better access to different types of data. One of the best examples is the Notification Center, which overlays the Home Screen to show you missed calls and other info. By doing this, the user doesn't have to leave the Home Screen (it stays blurred in the background) and also focuses better on the newly displayed data.

How does Apple do it

To achieve the Notification Center blur, Apple works directly with the iPhone's GPU, which shows in the responsiveness of the animations and of the data display. This is also what makes it possible to have video or moving components in the blurred background, the so-called "live blur". Unfortunately, Apple doesn't provide any API for this in the iOS 7 SDK, probably for security reasons. So, if you want to do it yourself, it will take a lot of time to write code that handles the GPU, and, to be honest, you probably only want to show an alert.

Solution

The natural thing to ask, then, is: how can we create this in our own apps? There are several ways to do it, but the one I'll present in this article is very easy to implement, using a class provided by Apple at WWDC 2013, called "UIImage+ImageEffects.h". The idea is to capture the current screen, blur the image and set it as the background of the new screen, alert or whatever you are displaying. We'll start off by creating a custom view controller based on UIViewController, in which we import the earlier mentioned class:

    #import "UIImage+ImageEffects.h"

In one of the life cycle methods of the controller we add the following, and we already have the easiest way to obtain a blur effect:

    - (void)viewWillAppear:(BOOL)animated {
        [super viewWillAppear:animated];
        UIImage *snapshot = [self takeScreenSnapshot];
        UIColor *tintColor = [UIColor colorWithWhite:0.2 alpha:0.15];
        UIImage *blurred = [snapshot applyBlurWithRadius:8
                                               tintColor:tintColor
                                   saturationDeltaFactor:1.8
                                               maskImage:nil];
        // Show the blurred snapshot behind the new content; a dedicated
        // UIImageView would work just as well as a pattern color.
        self.view.backgroundColor = [UIColor colorWithPatternImage:blurred];
    }

    - (UIImage *)takeScreenSnapshot {
        UIGraphicsBeginImageContext(self.view.bounds.size);
        if ([self.view respondsToSelector:
                @selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
            [self.view drawViewHierarchyInRect:self.view.bounds
                            afterScreenUpdates:NO];
        } else {
            [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        }
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Re-encode as JPEG to shrink the image before blurring it.
        NSData *imageData = UIImageJPEGRepresentation(image, 0.75);
        image = [UIImage imageWithData:imageData];
        return image;
    }

1. applyBlurWithRadius is the number of pixels taken into account for calculating the blur. The bigger the number, the bigger the blur.
2. tintColor is the tint applied to the blurred image. In my example, I used a darker tint, similar to the one in Notification Center.
3. saturationDeltaFactor is the image saturation level.
4. maskImage, though nil here, can be used to mask portions of the image, so that you get only partially blurred images, like below:

Going deeper

For those of you who want better performance or just another type of blur, we can have a look inside "UIImage+ImageEffects.h" and change a few things:

    typedef enum { NOBLUR, BOXFILTER, TENTFILTER } BlurType;

    @import UIKit;

    @interface UIImage (ImageEffects)
    - (UIImage *)applyLightEffect;
    - (UIImage *)applyExtraLightEffect;
    - (UIImage *)applyDarkEffect;
    - (UIImage *)applyDarkEffectWithTent:(CGFloat)radius;
    - (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
    - (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
                            blurType:(BlurType)blurType
                           tintColor:(UIColor *)tintColor
               saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                           maskImage:(UIImage *)maskImage;
    @end

As you can see among the newly added properties and methods, I added applyDarkEffectWithTent, which switches from the default box filter to a tent filter algorithm. The tent filter is faster and more powerful than the box filter, which is why we need only one pass through it instead of the three passes of the box algorithm, obtaining the blurred image even faster:

    - (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
                            blurType:(BlurType)blurType
                           tintColor:(UIColor *)tintColor
               saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                           maskImage:(UIImage *)maskImage
    {
        …
        if (blurType == BOXFILTER) {
            vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer,
                NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
            vImageBoxConvolve_ARGB8888(&effectOutBuffer, &effectInBuffer,
                NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
            vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer,
                NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
        } else {
            vImageTentConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer,
                NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
        }
        …
    }

By doing this we reduce the runtime from 184 ms to a low 16 ms. After all, we want to give our users a seamless experience, just like in Notification Center. I'll finish by suggesting that we use this blur technique only where blur actually enhances the user's experience, and not in every corner of the app, turning it into an overloaded, frozen blur.



Mihai Fischer

mihai.fischer@gmail.com iOS developer @ Dens.io




Data Integration with Talend Open Studio

As Jonathan Bowen mentioned in his book "Getting Started with Talend Open Studio for Data Integration", as soon as the second computer was manufactured, systems integration became an essential part of IT teams' work.

Today's systems complexity, together with the fast pace at which businesses evolve, highlights the need for a set of tools that allows us to quickly execute integration tasks. We also have to be able to react promptly to new business opportunities. Experience has shown us that, most of the time, new clients come asking us to integrate the product we are offering into their ecosystem. Rarely does an information system work isolated in its own universe. We have noticed on several occasions that the success of the project presented to a client depended on our ability to integrate the system with the products they were already using. The process we are talking about could mean the synchronization of two databases, once or recurrently, the consumption of some services – web services or other kinds – the generation and transfer of various types of files, etc. Thus, we deal with a variety of ways to accomplish these tasks, which adds to the complexity of the problem. Sometimes it is our responsibility to decide how we will accomplish the integration, but most of the time the client has specific requirements regarding this aspect as well. When we deal with such a situation, we can either manually build the interface between the systems as a custom solution or use a tool specialized in solving integration problems. Such a tool is Talend Open Studio, which comes with an interesting offer to help us solve our integration tasks.

Since this is a graphical tool, the product can be used both by programmers and by people who don't have programming skills. Yet, in order to define certain complex behavior, we have to write Java code from time to time, which means that users who don't know programming face certain limitations. Talend Open Studio is relatively easy to use; it is a quick way to model integration scenarios, most of the time reducing the implementation time from weeks or months to days or even hours, depending on the complexity of the project. However, we have to warn the readers that, as in many other areas, if due to overzealousness or an unfit design we over-engineer things, we risk getting a complex solution, hard for other users to understand, or even an inefficient one. Here, too, we need to follow some best practices that ensure the quality of our solution. Among the other advantages of using Talend Open Studio, we should note that this is an open source product that allows users to extend the platform as needed. Using it also boosts productivity, because the developers can concentrate more on the definition of the process than on its technical implementation. We have at our disposal a multitude of components, fit for situations more or less common, that we can use to define our processes. In addition, the Talend users' community is active and ready to offer technical advice.

An Overview of the Talend Open Studio Environment

Talend Open Studio for Data Integration is a graphical development environment that, as its name states, is specialized in data integration between systems. At the core of this open source system lies the Eclipse environment. Besides the creation of integration solutions, Talend Open Studio also includes the mechanisms necessary for delivering them – the jobs can be run both within the environment and as stand-alone scripts. For modelling the processes, the system uses connectors. The developers of the product offer us over 800 such connectors, which give us the possibility to easily connect databases, read information from various sources, transfer files and perform operations on them. We are also given the possibility to connect specialized components for defining complex integration processes. A good portion of the work we perform with Talend Open Studio is the graphical modelling of the processes we want to define. All this time, the platform does its builder work in the background, generating Java code. In fact, every component we use has an associated behavior, described by Java code.

Use Cases

As we mentioned in the previous section, the most common use cases of a Talend Open Studio project are these:

Transfer between databases: When new systems are created or existing ones are upgraded, the data needs to be migrated to a new database. This can have the same schema or a different one, and Talend Open Studio offers us the connectors and actions necessary for this process.

Files transfer: The integration tasks may need to transfer data in large quantities. This is often performed using files. An example of such a file is the classic CSV (comma separated values) file. It is also possible that the system which receives the transfer file needs the data in a different format. This case is also handled by the Studio, because it offers us the possibility to define processes that perform transformations on the transferred data. Moreover, we have at our disposal file management capabilities, through operations such as FTP transfers or archiving.

Synchronization: The systems that collaborate are not always connected to the same data repository, which means certain information may be duplicated within an ecosystem. Consequently, we need to make sure that this information is periodically synchronized. This is the case of the data about the clients of a company, which can be present, for instance, within the finance system, the distribution system or the CRM platform. Talend Open Studio can be used for performing the systems' synchronization with the help of jobs that automate the process.

ETL: This is an acronym for Extract, Transform, Load, terms that describe an essential process for data warehouse systems. Such a process extracts data from operational systems, transforms it by applying rules or functions and then loads it into the data warehouse. Again, Talend Open Studio makes our lives easier, helping us substantially in implementing this type of process.
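The ETL flow described above can be reduced to a minimal sketch in plain Java, the kind of code a Talend job generates at a much larger scale (the data and the transformation rule here are invented):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class EtlSketch {
    // Extract: a real job would read these rows from an operational system.
    static List<String> extract() {
        return Arrays.asList("ana;cluj", "dan;iasi");
    }

    // Transform: apply a simple rule (upper-case the city) to each record.
    static List<String[]> transform(List<String> rows) {
        List<String[]> out = new ArrayList<>();
        for (String row : rows) {
            String[] fields = row.split(";");
            out.add(new String[] { fields[0], fields[1].toUpperCase(Locale.ROOT) });
        }
        return out;
    }

    // Load: a real job would insert the records into the data warehouse.
    static int load(List<String[]> records) {
        return records.size(); // number of rows "loaded"
    }

    public static void main(String[] args) {
        System.out.println(load(transform(extract()))); // prints 2
    }
}
```

In the Studio, each of these three stages maps to one or more connectors, and the wiring between them replaces the hand-written plumbing.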

Conclusion

In this article we took a brief look at the context of integration processes, at what Talend Open Studio is and at the benefits of using this product. Also, by means of a small example, we tried to illustrate the simplicity of using the Studio and the quickness of implementing jobs, and to get an idea of the potential of this platform.

Example

To illustrate how easy it is to use this platform, we create a project containing one job that transforms an XML file into a CSV file. The graphical model of this job is illustrated in the figure below.

On the left-hand side we have a tFileInputXML component and on the right-hand side a tFileOutputDelimited component. They are connected through a Main connector. Before dragging the input component into the design area, we defined a metadata object to which we associated an XML file. The Studio automatically detected the schema of the document and offered us the possibility to select which nodes should be transferred to the output. Through the Main connector, Talend transferred to the output file exactly the structure we defined, without us writing a single line of code. All we had to configure in the output component was the path and the CSV file name. Of course, we can extend this job by connecting further components, such as the ones that work with FTP connections, to transfer our file to the target system.
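For the curious, the essence of what the generated job does can be approximated in plain Java with the standard DOM parser (the XML structure and element names below are invented; a real Talend job generates far more elaborate code):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlToCsv {
    // Reads <customer> elements and emits one CSV row per element.
    static String transform(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            StringBuilder csv = new StringBuilder("name,city\n");
            NodeList rows = doc.getElementsByTagName("customer");
            for (int i = 0; i < rows.getLength(); i++) {
                Element row = (Element) rows.item(i);
                csv.append(row.getElementsByTagName("name")
                              .item(0).getTextContent())
                   .append(',')
                   .append(row.getElementsByTagName("city")
                              .item(0).getTextContent())
                   .append('\n');
            }
            return csv.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<customers>"
                + "<customer><name>Ana</name><city>Cluj</city></customer>"
                + "<customer><name>Dan</name><city>Iasi</city></customer>"
                + "</customers>";
        System.out.print(transform(xml)); // name,city / Ana,Cluj / Dan,Iasi
    }
}
```

The Studio spares us exactly this kind of plumbing: the schema detection, the node selection and the field mapping are all done graphically.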



Dănuț Chindriș

danut.chindris@elektrobit.com Java Developer @ Elektrobit Automotive




Securing Opensource Code via Static Analysis (II)

As previously discussed, in this paper we run Klocwork Insight against the Linux kernel (version 2.6.32.9) and discuss the results of our analysis. The Klocwork Insight version used for this analysis was 9.2.0.6223. Figure 3 shows the Klocwork checkers we have used for analyzing C/C++ source code. These are actually 'checker families' or 'categories', as each of these tree items (in figure 3) contains a number of individual checkers. These checkers were enabled in Klocwork for our analysis, to identify all significant issues in the source code being analyzed. The project metrics reported by Klocwork after the analysis of the Linux kernel (2.6.32.9) code are shown in Table 1.

TABLE I. PROJECT METRICS REPORTED FOR SCA OF LINUX KERNEL

Figure 3. Klocwork checks for C/C++ code

In the following two sections, we discuss the vulnerability analysis and the complexity analysis performed after running SCA on the Linux kernel code.

Vulnerability analysis

Certain Common Vulnerabilities and Exposures (CVE) identifiers for publicly known information security vulnerabilities for numerous Linux kernel versions, including version 2.6.32.9, are presented in Table 2. The vulnerabilities listed in Table 2 are not comprehensive by any means and are a subset of the vulnerabilities

published in the National Vulnerability Database (NVD) after the release of the Linux kernel version 2.6.32.9. The release of Linux kernel version 2.6.32.9 was announced in February 2010. The vulnerabilities listed in Table 2 were published in the NVD between February 2010 and July 2011, which is our sampling period. NVD is the U.S. government repository of standards-based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement and compliance. NVD includes databases of security checklists, security-related software flaws, misconfigurations, product names and impact metrics [13].

By performing SCA on the previously mentioned version of the Linux kernel, we were able to identify the vulnerabilities shown in Table 2. These vulnerabilities account for about 10% of all the Linux kernel (v2.6.32.9) vulnerabilities reported in the NVD between February 2010 (the release of Linux kernel v2.6.32.9) and July 2011 (the end of our sampling period). Although not all vulnerabilities published in the NVD for the Linux kernel 2.6.32.9 could be detected by static analysis alone, the 10% of issues that SCA was able to identify, as shown in Table 2, include a significant number of security and quality issues. Of these vulnerabilities, about 22.3% were rated 'High', 44.3% were rated 'Medium' and 33.3% were rated 'Low' on the Common Vulnerability Scoring System (CVSS). This exercise indicates that certain vulnerabilities in the Linux kernel (shown in Table 2), published as recently as July 2011 in the NVD, could have been detected much earlier, had diligent static analysis and review of the kernel been performed earlier.

Another important factor to consider is the types of vulnerabilities that can be identified by SCA. The types of vulnerabilities that SCA was able to detect in our experiment include buffer overflows, integer overflows/underflows, integer signedness errors and improper memory initialization. These are some of the vulnerabilities that have been frequently exploited to launch malicious attacks on various software applications. For example, buffer overflow anomalies have historically been exploited by computer worms like the Morris worm (1988) or the more recent Conficker worm (2008). It is well worth the effort to identify such bugs early on when incorporating opensource code along with proprietary code; our experiment demonstrates that SCA has the capability to detect a significant number of such bugs, which is a strong motivating factor to perform SCA on the adopted opensource code.

Beyond detecting these previously known vulnerabilities, the SCA tool was able to flag certain critical issues in the code, which may require further investigation to ascertain their genuineness and exploitability. Although the exploitability of some of these vulnerabilities cannot be established by static analysis alone, it is well within the interests of any software vendor to address these issues before the software is released to the market. This establishes the value of performing SCA on opensource code to identify and fix certain coding issues early on, as opposed to waiting for the opensource community to identify and report them, since the sooner a vulnerability or bug is detected, the cheaper it is to fix.

TABLE II. LINUX KERNEL VULNERABILITIES DETECTED BY SCA
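Integer overflow, one of the bug classes listed above, is easy to reproduce and is exactly the kind of unchecked arithmetic an SCA tool flags. A deliberately minimal Java sketch (the method names are invented; the same wrap-around behavior in C size computations is what feeds buffer-overflow exploits):

```java
public class OverflowDemo {
    // Naive size computation: count * elementSize silently wraps around
    // when the product exceeds Integer.MAX_VALUE.
    static int naiveBufferSize(int count, int elementSize) {
        return count * elementSize; // unchecked: may overflow
    }

    // Hardened version: Math.multiplyExact (Java 8) throws on overflow
    // instead of wrapping.
    static int checkedBufferSize(int count, int elementSize) {
        return Math.multiplyExact(count, elementSize);
    }

    public static void main(String[] args) {
        // 1_500_000_000 * 2 exceeds 2^31 - 1 and wraps to a negative value.
        System.out.println(naiveBufferSize(1_500_000_000, 2));
        try {
            checkedBufferSize(1_500_000_000, 2);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

A static analyzer reports the first form because the multiplication result can be used, negative or truncated, in a later allocation or bounds check.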

Complexity analysis

In order to analyze which components of the Linux kernel have higher incidences of vulnerabilities being reported, we break the kernel files up into six significant categories, as follows [12]:
• Core: the files in the init, block, ipc, kernel, lib, mm, and virt subdirectories.
• Drivers: the files in the crypto, drivers, sound, security, include/acpi, include/crypto, include/drm, include/media, include/mtd, include/pcmcia, include/rdma, include/rxrpc, include/scsi, include/sound, and include/video subdirectories.
• Filesystems: the files in the fs subdirectory.
• Networking: the files in the net and include/net subdirectories.
• Architecture-specific: the files in the arch, include/xen, include/math-emu, and include/asm-generic subdirectories.
• Miscellaneous: all of the rest of the files not included in the above categories.

A quick look through the NVD shows that most of the vulnerabilities published for the Linux kernel (in 2010 and 2011) occur in the 'networking' components of the kernel, followed by the 'drivers' and 'filesystems' components, as seen in figure 4. This can be attributed to the fact that the network stack, which, of course, deals with networking capabilities, is one of the most frequently exploited components of the Linux kernel. This is consistent with the "attractive proposition" of launching remote attacks; the network stack is therefore a common target for exploitation, which may account for the higher number of vulnerabilities being reported.

Figure 4. Number of vulnerabilities per significant category in Linux kernel (v2.6) published in NVD (years 2010-2011)

Usually, SCA tools can calculate the complexity metric for the programs that they analyze. In a nutshell, the complexity metric measures the number of decisions there are in a program, i.e., it directly measures the number of linearly independent paths through a program's source code. The more decisions made at runtime, the more possible data paths. The National Institute of Standards and Technology (NIST) recommends that programmers count the complexity of the modules they are developing and split them into smaller modules whenever the cyclomatic complexity of a module exceeds 10; in some circumstances it may be appropriate to relax the restriction and permit modules with a complexity as high as 15, if a written explanation of why the limit was exceeded is provided [9]. A high complexity metric makes it virtually impossible for a human coder to keep track of all the possible paths; hence, when the code is modified or new code is added, it is highly probable that the coder will introduce a new bug. If a program's cyclomatic complexity is greater than 50, the program is considered untestable and a very high risk. Studies show a correlation between a program's cyclomatic complexity and its maintainability and testability, implying that in files of higher complexity there is a higher probability of errors when fixing, enhancing or refactoring source code.

In an opensource project like Linux, most of the development effort is communicated through mailing lists. The developers are spread across the globe and have varying levels of software development skills. This situation may therefore present a challenging task for any central entity responsible for coordinating the development efforts. The number of methods with complexity greater than 20 contained in the Linux kernel components is shown in figure 5.

The average complexity ratio (maximum complexity of methods / total number of methods) for methods with complexity greater than 20 per significant category in the Linux kernel is depicted in figure 6. The first column in figure 6 represents the average complexity for the entire Linux kernel (v2.6.32.9), while the rest of the columns represent the average complexity for significant components contained in the Linux kernel. From figure 6, it is evident that the average complexity for the significant individual components in the Linux kernel


is much higher (more than 2x) than the average complexity for the entire Linux kernel. A quick look through the NVD shows that most of the published vulnerabilities for the Linux kernel occur in components containing a large number of higher-complexity methods, mostly the 'drivers', 'filesystems' and 'networking' components, as shown in figure 4. The percentage size per significant category in the Linux kernel (version 2.6) is shown in figure 7. The number of lines changed per significant category in the Linux kernel (version 2.6) is shown in figure 8. It is interesting to note that although the 'networking' component of the Linux kernel contains fewer higher-complexity methods (figure 5) and fewer Lines of Code (LOC) (figure 7) than the 'drivers' and 'filesystems' components, a majority of the vulnerabilities published in the NVD occur in the 'networking' component (figure 4), for the reason discussed in the previous section on vulnerability analysis. Further, although the 'networking' component contains fewer LOC, its average complexity (figure 6) is the highest among the components of the Linux kernel, which suggests that higher-complexity components tend to have a higher number of bugs. Another interesting fact is that although the 'architecture-specific' component of the Linux kernel has fewer higher-complexity methods (figure 5), it has higher LOC (figure 7) and it received a significant number of LOC changes (figure 8). Further, the average complexity (figure 6) of the 'architecture-specific' component is high, which suggests that components receiving a higher number of LOC changes tend to have higher complexity.
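As a rough illustration of the metric itself, cyclomatic complexity can be approximated by counting branching constructs and adding one. A toy Java counter (a real tool such as Klocwork builds a control-flow graph rather than matching keywords):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ComplexityToy {
    // Crude approximation: complexity ~ 1 + number of decision points
    // (branching keywords and short-circuit operators).
    private static final Pattern DECISIONS =
            Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    static int cyclomatic(String source) {
        Matcher m = DECISIONS.matcher(source);
        int count = 1; // one path through a straight-line method
        while (m.find()) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        String method = "if (a && b) { for (;;) { } } else { while (c) { } }";
        // 1 + if + && + for + while = 5
        System.out.println(cyclomatic(method));
    }
}
```

Even this crude count makes the NIST thresholds concrete: a method that trips the counter past 10 is a candidate for splitting.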
In general, from our analysis, we observe the following patterns:
• Higher probability of bugs in components with higher complexity (figures 4, 5 and 6), e.g., the 'drivers', 'filesystems' and 'networking' components.
• Complex components with higher LOC received a higher number of code changes and updates (figures 7 and 8), e.g., the 'drivers' and 'architecture-specific' components.
• Critical components, networking for example, tend to have higher incidences of bugs being reported (figure 4).
By observing these patterns, we may be able to concentrate the scope of SCA on these critical code components, as explained in the following sections.

Figure 5. Number of methods with complexity > 20 per significant category in Linux kernel (v2.6)

Figure 6. Average complexity ratio

PROPOSED ALTERNATE WORKFLOW

As mentioned earlier and depicted in figure 1, opensource code incorporated in commercial software is usually not subjected to the same stringent static analysis and review as newly developed proprietary code. In the alternate workflow that we propose, shown in figure 9, we recommend including the SCA tool's output for both newly developed software and adopted opensource software in the formal software review package. This workflow subjects the opensource code to the same stringent code analysis and review process as newly developed proprietary code. Any software bug found by static analysis is fixed in both opensource and newly developed code before it is subjected to dynamic analysis, and well before it is eventually released to the market. Of course, adhering to the license terms of the adopted opensource code is equally important and may require communicating the fixes or code modifications to the maintainers before the software is shipped. This process may not find all the bugs, but it will help catch certain bugs early on, providing an opportunity to fix them earlier, as demonstrated in the previous section on vulnerability analysis. Further, consider the scenario of upgrading the open source components that have been incorporated. Adopting the proposed workflow will be even more helpful in such a scenario:
• It helps decide whether an upgrade of the component is feasible, by reviewing the bugs being fixed and the newly introduced bugs. Sometimes, the extra effort required to fix newly introduced bugs in the upgraded software version may be a gating factor for product release.
• It helps estimate the potential porting work, by understanding the relationship between the open source components and the in-house components.

Challenges to adopting the proposed workflow

Although static analysis of adopted opensource code is useful in identifying certain software bugs early on, there are technical and project-oriented challenges that may not make this a viable proposition in certain situations, as discussed further.
• SCA tools tend to produce a large number of false positives. Reviewing and eliminating false positives can be a daunting task for both quality assurance and development teams, especially when projects are experiencing severe resource, time and budget constraints. Once the false positives are eliminated, the genuine issues must be communicated to the development teams for fixing. Sometimes, the fixes may also have to be communicated to the opensource software's maintainers before product shipping, based on the opensource software's license terms.
• Severity ratings for the issues flagged by Klocwork are as shown in figure 10. Klocwork supports 10 severity levels, with 1 (Critical) being the highest and 10 (Info) being the lowest. Similarly, other SCA tools will have their own issue classification. Due to the large number of false positives, the tendency to ignore issues flagged as noncritical by SCA tools may result in genuine issues being ignored. Therefore, diligently reviewing issues before ignoring them, although a daunting task, may in fact be worth the effort.
• Diligently performing SCA and reviewing SCA results for all code, opensource or newly developed, requires commitment and dedicated trained resources, which may not be available in projects that are already understaffed and overworked.

Although SCA of adopted opensource code, along with newly developed code, is recommended, the challenges mentioned earlier necessitate trade-offs due to budget, time and resource constraints; such trade-offs may seem necessary but are probably risky. Some ways of addressing the trade-offs are as follows:
• One way is to concentrate on components of higher complexity in the adopted opensource code and review SCA results for these higher-complexity components, which can save time and resources by narrowing down the scope of review. This is based on our analysis, as explained in the previous section on complexity analysis: components of higher complexity tend to have higher incidences of bugs being reported. Applying static analysis to complex code has an interesting side effect, though: it is harder to distinguish false positives, because the reviewer now needs to understand this complex code to classify the warning.
• Another way is to identify critical components of the project, networking for example, and concentrate SCA efforts on these critical components, as demonstrated in the previous section on vulnerability analysis.
• Regarding false positives, exploring their nature might be worth the effort. False positives can either be real false positives or perceptual false positives.
The first are instances when the tool is just wrong, needs to be tuned, or the checker needs to be simply turned off for that particular code base. The other category, perceptual false positives, is more of a developer education and/or prioritization issue. Often, especially in the case of security vulnerabilities or of the more subtle reliability bugs, the tool is correct, but the developer either doesn't understand why it is an issue or isn't that concerned about the particular issue, i.e., education or prioritization. It is incumbent on the tool vendor to ensure it explains its bug reports clearly, but the other part of the equation is a sensible escalation process inside the organization, to more senior developers, and/or some education and training.

CONCLUSIONS

Opensource software, Linux for example, is available to any entity seeking to acquire and use it in their products or projects, as long as they adhere to the opensource software's license terms. In our analysis, we have chosen Klocwork as the SCA tool and the Linux kernel as the codebase only as examples of representative tools and projects, to highlight the issue of securing opensource code incorporated in commercial software. In general, our analysis can be extended to other SCA tools and other opensource projects. Although opensource projects define proper channels to report security and non-security bugs, these bugs are reported by individual developers as and when they come across them, in an ad-hoc manner. Any unbiased, dedicated effort to statically analyze the opensource codebase and to document and report the findings is absent in the opensource community, although, of late, SCA companies like Klocwork or Coverity [10] have taken up the initiative in this


no. 25/July, 2014 | www.todaysoftmag.com

direction. But even then, the rate at which newer versions of opensource software get released or updated presents a barrier to such efforts. Efforts such as the National Security Agency (NSA) Center for Assured Software (CAS) programs [11], which in 2010 presented a study of static analysis tools for C/C++ and Java using the test cases available as the Juliet Test Suites [14], are a step in the right direction, but even then there are no absolute metrics for the choice of a particular SCA tool. Competing SCA vendors tend to find quite different issues in a common codebase, and the overlap becomes almost negligible when as few as three different tools are included in the comparison. As a rule, not all issues identified by static analysis tools are bugs by default. A certain percentage of issues can likely be safely ignored after proper review. However, of the total set of found issues, a substantial portion does warrant correction. Although detailed further investigation is necessary to ascertain the exploitability of these bugs in runtime situations, for example by fuzzing [4] or by Directed Automated Random Testing (DART) [19], our analysis demonstrates that applying static analysis to opensource code has the benefit of catching certain critical bugs early, as opposed to waiting for the opensource community to find and report them as and when they come across them. Identifying these bugs early means they can be fixed earlier. Software vendors incorporating GPL-based opensource code in their proprietary software may be required to make the entire project opensource, including their proprietary code. Contrast this with incorporating opensource code under Apache license or MPL terms, which does not bind the software vendor to make the entire project opensource. This may lead to a situation where part of the software is opensource and the rest is proprietary.
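The observation above about negligible overlap between competing SCA tools can be made concrete with a toy sketch; the three finding sets below are invented for illustration and are not real Klocwork or Coverity scan results.

```python
# Hypothetical findings ("checker:file:line") from three SCA tools run on the
# same codebase. All values are made up for the example.
tool_a = {"null-deref:fs/ioctl.c:120", "buf-overflow:net/core.c:88",
          "uninit:mm/slab.c:45"}
tool_b = {"null-deref:fs/ioctl.c:120", "leak:drivers/usb.c:301",
          "dead-code:lib/sort.c:12"}
tool_c = {"buf-overflow:net/core.c:88", "race:kernel/fork.c:77",
          "leak:drivers/usb.c:301"}

total = tool_a | tool_b | tool_c          # every distinct finding
shared_by_all = tool_a & tool_b & tool_c  # findings every tool agrees on

print(len(total))     # 6 distinct findings across the three tools
print(shared_by_all)  # set(): no single finding reported by all three
```

Each pair of tools overlaps on one finding here, yet the three-way intersection is already empty, which mirrors the pattern described in the text.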
Any exploitable bug found in the opensource part of such software can be virulent, and the software product can then be exploited without any effort spent on reverse engineering. Consequently, securing opensource code that is incorporated in commercial software is as critical as securing the proprietary software itself. Therefore, along with carefully validating newly developed proprietary software using SCA, it is also highly desirable to include static analysis of any opensource code needed for the product in the general software validation process, and to perform a formal review of that opensource code before adopting it. An analogy can be drawn to the build approach. When consuming an opensource library, would the developer download a binary and include it directly in the build, or download the source and build it within the project? By building from source rather than consuming the binary directly, the project is protected from the vagaries of somebody else's build environment, which may not be relevant to the project. If so, why not include that same opensource code in the static analysis run along with newly developed code? It is well understood in the software industry that the sooner a bug is detected, the cheaper it is to fix, thus avoiding expensive lawsuits or irreparable damage to company reputation. In many ways, this extra effort is critical in improving overall software product quality, reliability and security. Ultimately, organizations that can afford the cost and have a strong need will find value in this effort.

Raghudeep Kannavara

raghudeep.kannavara@intel.com Security Researcher, Software and Services Group @Intel USA


management

TODAY SOFTWARE MAGAZINE

Gogu and Mișu’s Old Man

“Holy crapping crap!” Misu burst out spitefully. Spitefully only for those who knew him very well; otherwise, to a stranger, it would have seemed a mere observation, shared calmly in a deep, whispered voice. Obviously, that was not Gogu’s case. He looked at Misu, dumbstruck, as he couldn’t believe that somebody had succeeded in getting – Misu! – out of his usual mood of lenient understanding and utter detachment from the whirl of the surrounding world. However, it seemed that, finally, the impossible had occurred and the whirl had managed to hook him too, Gogu said to himself, somewhat spitefully. “Holy crap”, Misu emphasized, still calmly, but a little more clearly this time. Gogu could no longer help himself: “Troubles, Misu?” Misu raised his eyes towards Gogu, smiled warmly at him and agreed: “Well… there are a few problems…” That’s incredible, he would be capable of smiling even at the firing squad, thought Gogu, and added some more wood to the fire: “Annoying problems?” “Yeah, I wanted to tell them a thing or two… but I’m over it now. It’s just that I don’t know how to solve this.” “Well, let me hear about it; maybe together we can work it out”, offered Gogu, driven by curiosity. “Look, you know that Chief has appointed me project manager…” “I do”, Gogu interrupted him, “the whole company knows; you have driven us crazy with it. If anyone wakes us up at night, it’s the first thing that comes to mind.” He noticed Misu’s deeply offended look and went on in a warmer voice: “Come on, tell me, let’s see what the problem is.” “You know that this project is different…” “All projects are different; that’s why they are called projects”, Gogu couldn’t help saying. “It’s good that you’re smart”, said Misu ironically, but since the topic was much too important to him, he decided to ignore Gogu’s remarks and go on. “You know damn well what I meant:

that compared to our usual outsourcing projects, this one – which launches a new product – has entirely different characteristics, other requirements, other limitations; it’s not only about development, there are also marketing issues. In other words, it is much more complex.” “Listen, if you are going to start again from A to Z and confuse me with the complexity, the stakeholders and so on, please go sing at another table.” Misu’s face expressed utter perplexity: “What do you mean?” He spoke at a slow pace, trying to catch the meaning of Gogu’s words. “What does singing have to do with it?” Oh, brother, he never learns, thought Gogu; where did he grow up, at a sheepfold? “Ha, ha, ha”, he found himself laughing aloud, to Misu’s complete stupefaction. At a sheepfold, indeed! Ha, ha, he couldn’t stop laughing. Misu couldn’t understand the reason for Gogu’s healthy laughter, but he guessed that it somehow had to do with him. So he crossed his arms over his chest, showing that he was expecting clarifications. Which he waited for quite a while, as Gogu took some time to stop. “Ok, I’m done, Misu, sorry, it won’t happen again; do go on.” It turned out that the problem was not complicated at all – some people had made some mistakes with the advertising leaflets – but it had been aggravated by the delay in solving it. The approach had not excelled in diplomacy, the communication had been done exclusively through e-mails, and their tone had become more and more severe as the mail ping-pong dragged on. Things had derailed far from the problem, the last e-mails amounting to little more than threats, the original source of the discussion and the problem itself already forgotten. “And now I come and ask you, Gogu, how do I solve this problem?! Because, in the meantime, the rest of the project went on, and we have the official release three days from now.
The online part ran smoothly, but the live presentation also involves all these leaflets… Oh”, he let out a deep sigh, a sigh coming from the bottom of his Transylvanian soul, which could not understand how a few words could cause so many problems.



“Hi, hi… Did you actually think that it was all over and you could consider the project closed? Ha, ha… Misu, don’t you know the saying: the first 90% of a project takes 90% of the time, and the last 10% takes another 90%… ha, ha…” “Listen, Gogu, I may not know that one, but I do know the other one, the one saying that I need 42 muscles to frown, 28 to smile, but only 4 to stretch my arm and punch you in the face.” Gogu quickly looked at him, but calmed down, as Misu still had a smile on his face. Yeah, when he no longer smiles, we will all be in trouble, Gogu thought, trying to figure out his chances against a possible Misu locomotive, and quickly concluding that no one should ever wish to make the giant Transylvanian angry. A giant who knew, however, how to play so easily with the kids. Gogu remembered the holiday that had just ended. And then he had the revelation! “Listen, Misu, do you remember how you played the castellan who repaired his fortifications before the enemy’s assault, last week?” Slightly surprised by the turn of the discussion, Misu quickly came back to himself and happily, but also proudly, nodded in approval. They had been together on holiday, at the seaside: Misu’s first contact with the Greek sun, turquoise sea and white, shining sand. A situation had occurred and Gogu’s wife had been forced to stay at home, but Gogu didn’t have the heart to deny his son the holiday the little one had been dreaming about for months. So he asked Misu to join them. And what had been a random invitation turned into a great holiday for Misu and the kid, who had a lot of fun together. Misu remembered the scene of repairing the fortifications in the Greek sand and his face brightened. “You mean to say that what worked with your son might work here, too? Well, that’s right, of course it would work; wait a second, I know exactly what I’ve got to do…” and he returned to his computer, totally ignoring Gogu.
“More exactly, what is it that he did with your son?” Gogu saw Chief a second before he made his presence felt through the whispered question, as he didn’t want to disturb Misu. Not that he would have heard anything: once wrapped up in his things, you could fire a gun by his ear and he would completely ignore you, thought Gogu, then continued aloud: “Well, I was having a hard time getting the kid out of the water, but he was like no and no, ’cause that’s why people go to the seaside, to bathe. And I tried everything: talking nicely, threatening; we ended up with screaming (me) and complete ignoring (him), ugly words (me), just wait till I tell mother what you said (him)… a true war. And in the middle of the fight, I hear Misu, with that calm of his that drives us mad, telling my son he needs help with his wonderful sand edifice, that he can no longer manage alone to build the fortification, or a defense wall, whatever. And my son instantly gets out of the water and starts to work. It was not until five minutes later that I recovered from the shock… Obviously, in the evening, I had my share of introduction to child psychology,



in the vision of Misu / Misu’s old man.” “Hm… not bad, this vision,” smiled Chief. “I suppose we’ll also have our share of introduction to the vision by tonight… Keep me posted.”

***

The introduction to the vision, however, came much later, namely at the project closure meeting, about two weeks after the official release of the new product; a successful release, which brought congratulations to the entire team coordinated by Misu. In the meeting, Misu asked each member of the team to talk about three things: what they thought went well, what didn’t go well and, finally, what exactly they would do differently if they were to start the project all over again. Chief wanted to know how they had solved the wrong print problem, thus triggering the vision: “It wasn’t such a big deal. People were upset; they couldn’t understand each other through e-mails: who was to blame, who said what and what they had requested, where the bottom of things lay and whom we should punish. My old man taught me to find out only what the problem is and try to solve only that. It was not my problem to find out who started what, but to find a way to get the right materials in time for the release. So, I wrote a nice e-mail, asking them to help me keep the date announced for the official event and to send the correct materials directly to the expo area. Which they did, and I thanked them for it. That’s pretty much what my old man taught me: he used to say that people react quicker to a request for help than to an instruction…”

Simona Bonghez, Ph.D.

simona.bonghez@confucius.ro Speaker, trainer and consultant in project management, Owner of Colors in Projects


