atodorov.org - you can logoff, but you can never leave
Open-Source Security Best Practices You Can't Ignore in 2020

Open source components are incredibly useful in shortening development time. Open source projects are created, maintained, and used by developers of all levels and companies of all sizes. However, you can't always determine who created the code and who edited the project. For all you know, there's a piece of spyware hiding somewhere in the codebase. Read on to learn how to apply open source security in 2020.

What Is Open-Source Software?

Open-source software uses freely available code so that anyone can view and modify it. It is created collaboratively by communities of developers at no charge. Some of the most popular open-source programs are Linux, Kubernetes, Jenkins, and WordPress. Open-source software can have many different licensing terms. There are more than 1400 different open-source licenses, the most common of which are MIT, GPL, and Apache. Most licenses have two things in common:

- Licenses do not require a license fee for the software
- Licenses allow anyone to contribute to or modify the program

Open-source software isn't always free of charge - companies often charge for support, implementation, and additional features added on to open-source components. However, open-source software can be cheaper to implement. This cost savings is why modern enterprise software relies heavily on open source components. Likewise, many popular commercial applications use thousands of open source components as part of their code.

Open-Source Risks You Must Know About

There are several risks you might face when using and including open-source components.

Public Nature of Vulnerabilities

Open-source code is publicly available for inspection. This allows community members to contribute to identifying and fixing vulnerabilities. Ideally, contributors can develop patches quickly, before the vulnerability is made public. Once discovered, open-source vulnerabilities are published on the National Vulnerability Database (NVD). This database is publicly available and searchable, meaning that both open-source users and hackers can see vulnerability information. Hackers use this public availability to their advantage, attempting to exploit vulnerabilities as soon as a flaw is announced. This can enable hackers to attack systems before users get a chance to apply patches.

A well-known example of this exploitation is the Equifax breach, in which 143 million records were compromised. This breach occurred because attackers were able to exploit a known vulnerability in the open-source Apache Struts framework. Although this vulnerability was made public several months before, Equifax never patched their systems to protect against it.

License and Use Infringement

Open-source projects lack standard commercial controls, trusting contributors to act ethically. Unfortunately, this means that proprietary code may get included in projects without a project maintainer's awareness. An example of this was seen in a case brought by SCO Group. They accused IBM of including part of their proprietary code in Project Monterey. This code was unknowingly incorporated through open-source components that IBM included in the project.

Operational Risks

Operational inefficiencies can be a major source of risk when using open-source components - in particular, inefficiencies caused by inadequate tracking or monitoring of components.
If you are unaware of what components you have or where components are stored, you cannot ensure your systems are up to date. The possibility of losing support for a component is another risk you might face. Open-source projects are based on voluntary engagement. If a community loses interest in a project, it can see decreased support or be dropped entirely. For such projects, you become directly responsible for ensuring that vulnerabilities are identified and patched.

To address these risks, you need to ensure that you maintain an inventory of components. Doing so can provide visibility of your risks and can help ensure that you are using components uniformly. Often, this means using software composition analysis tools to automate this process and reduce manual labor.

Best Practices For Using Open-Source Securely in 2020

As the number of open-source projects increases, the likelihood that your systems will include open-source components increases. To ensure that these components provide maximum benefit with minimum risk, there are several open source security best practices you should adopt.

Balance Functionality and Risk

You may be able to gain the functionality you need with just part of an open-source project. When considering the inclusion of an open-source project, evaluate its components before you include anything. You may find that you only need one library or service instead of an entire project. By limiting what you include, you can reduce the risk of including additional vulnerabilities and simplify integration.

Consider Historical Security

To be considered secure, code must be reviewed and tested for vulnerabilities. However, testing takes time and testing tools can be expensive, so it may be overlooked in open-source projects. You can get a better idea of the overall security of a project by evaluating how security is addressed in a project's documentation. If a project doesn't specify how vulnerabilities are identified or what measures are taken to prevent flaws, you should be wary. Before including components, consider the security history of a project, including the average number and type of bugs per release. If a project has a history with lots of vulnerabilities, consider looking for an alternative. You should also take into account how long it takes a community to fix vulnerabilities once reported. Slow fixes can signal weak community support or significant issues with the source code.

Consider Community Size and Engagement

Open-source software is typically supported by volunteers, including amateur developers. This means projects can suffer from a lack of consistency. Ideally, projects have a medium to large community base. This signals that quality is likely to be higher and that projects are unlikely to be abandoned. You should also consider the size and frequency of releases a community is putting out. If releases are haphazard or infrequent, you will have a harder time maintaining any components you include. For the most reliable projects, release schedules are set and you can anticipate the amount of effort to devote to maintenance.

Conclusion

Hopefully, this article helped you learn the importance of open source security. In a time when networks become increasingly distributed, securing your applications becomes a crucial element of the development process. Many developers have already realized that and are in the process of shifting security to the left.
That means putting security as a top priority throughout all development stages to ensure your code is as secure as possible.

Author Bio

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. LinkedIn: https://www.linkedin.com/in/giladdavidmaayan/

Posted by Gilad David Maayan on Tue 11 February 2020. There are comments.

Comparing equivalent Python statements

While teaching one of my Python classes yesterday I noticed a conditional expression which can be written in several ways. All of these are equivalent in their behavior:

```python
if os.path.isdir(path) is False:
    pass

if os.path.isdir(path) is not True:
    pass

if os.path.isdir(path) == False:
    pass

if os.path.isdir(path) != True:
    pass

if not os.path.isdir(path):
    pass
```

My preferred style of writing is the last one (not os.path.isdir()) because it looks the most pythonic of all. However the 5 expressions are slightly different behind the scenes so they must also have different speed of execution (click operator for link to documentation):

- is - identity operator, e.g. both arguments are the same object as determined by the id() function. In CPython that means both arguments point to the same address in memory
- is not - yields the inverse truth value of is, e.g. both arguments are not the same object (address) in memory
- == - equality operator, e.g. both arguments have the same value
- != - non-equality operator, e.g. both arguments have different values
- not - boolean operator

In my initial tweet I mentioned that I think is False should be the fastest. Kiwi TCMS team member Zahari countered with not to be the fastest but didn't provide any reasoning! My initial reasoning was as follows:

- is is essentially comparing addresses in memory so it should be as fast as it gets
- == and != should be roughly the same but they do need to "read" values from memory which would take additional time before the actual comparison of these values
- not is a boolean operator but honestly I have no idea how it is implemented so I don't have any opinion as to its performance

Using the following performance test script we get the average of 100 repetitions from executing the conditional statement 1 million times:

```python
#!/usr/bin/env python
import statistics
import timeit

t = timeit.Timer(
"""
if False:
#if not result:
#if result is False:
#if result is not True:
#if result != True:
#if result == False:
    pass
""",
"""
import os
result = os.path.isdir('/tmp')
""")

execution_times = t.repeat(repeat=100, number=1000000)
average_time = statistics.mean(execution_times)
print(average_time)
```

Note: in none of these variants is the body of the if statement executed, so the results must be pretty close to how long it takes to calculate the conditional expression itself!

Results (ordered by speed of execution):

```
False _______ 0.009309015863109380 - baseline
not result __ 0.011714859132189304 - +25.84%
is False ____ 0.018575656899483876 - +99.54%
is not True _ 0.018815848254598680 - +102.1%
!= True _____ 0.024881873669801280 - +167.2%
== False ____ 0.026119318689452484 - +180.5%
```

Now these results weren't exactly what I was expecting. I thought not would come in last but instead it came in first! Although is False came in second it is almost twice as slow compared to baseline.
Why is that? After digging around in CPython I found the following definition for comparison operators:

Python/ceval.c

```c
static PyObject *
cmp_outcome(int op, PyObject *v, PyObject *w)
{
    int res = 0;
    switch (op) {
    case PyCmp_IS:
        res = (v == w);
        break;
    case PyCmp_IS_NOT:
        res = (v != w);
        break;
    /* ... skip PyCmp_IN, PyCmp_NOT_IN, PyCmp_EXC_MATCH ... */
    default:
        return PyObject_RichCompare(v, w, op);
    }
    v = res ? Py_True : Py_False;
    Py_INCREF(v);
    return v;
}
```

where PyObject_RichCompare is defined as follows (definition order reversed in actual sources):

Objects/object.c

```c
/* Perform a rich comparison with object result.  This wraps do_richcompare()
   with a check for NULL arguments and a recursion check. */
PyObject *
PyObject_RichCompare(PyObject *v, PyObject *w, int op)
{
    PyObject *res;

    assert(Py_LT <= op && op <= Py_GE);
    if (v == NULL || w == NULL) {
        if (!PyErr_Occurred())
            PyErr_BadInternalCall();
        return NULL;
    }
    if (Py_EnterRecursiveCall(" in comparison"))
        return NULL;
    res = do_richcompare(v, w, op);
    Py_LeaveRecursiveCall();
    return res;
}

static PyObject *
do_richcompare(PyObject *v, PyObject *w, int op)
{
    richcmpfunc f;
    PyObject *res;
    int checked_reverse_op = 0;

    if (v->ob_type != w->ob_type &&
        PyType_IsSubtype(w->ob_type, v->ob_type) &&
        (f = w->ob_type->tp_richcompare) != NULL) {
        checked_reverse_op = 1;
        res = (*f)(w, v, _Py_SwappedOp[op]);
        if (res != Py_NotImplemented)
            return res;
        Py_DECREF(res);
    }
    if ((f = v->ob_type->tp_richcompare) != NULL) {
        res = (*f)(v, w, op);
        if (res != Py_NotImplemented)
            return res;
        Py_DECREF(res);
    }
    if (!checked_reverse_op && (f = w->ob_type->tp_richcompare) != NULL) {
        res = (*f)(w, v, _Py_SwappedOp[op]);
        if (res != Py_NotImplemented)
            return res;
        Py_DECREF(res);
    }

    /**********************************************************************
     IMPORTANT: actual execution enters the next block because the bool
     type doesn't implement its own `tp_richcompare` function, see:
     Objects/boolobject.c PyBool_Type (near the bottom of that file)
    ***********************************************************************/

    /* If neither object implements it, provide a sensible default
       for == and !=, but raise an exception for ordering. */
    switch (op) {
    case Py_EQ:
        res = (v == w) ? Py_True : Py_False;
        break;
    case Py_NE:
        res = (v != w) ? Py_True : Py_False;
        break;
    default:
        PyErr_Format(PyExc_TypeError,
                     "'%s' not supported between instances of '%.100s' and '%.100s'",
                     opstrings[op],
                     v->ob_type->tp_name,
                     w->ob_type->tp_name);
        return NULL;
    }
    Py_INCREF(res);
    return res;
}
```

The not operator is defined in Objects/object.c as follows (definition order reversed in actual sources):

Objects/object.c

```c
/* equivalent of 'not v'
   Return -1 if an error occurred */
int
PyObject_Not(PyObject *v)
{
    int res;
    res = PyObject_IsTrue(v);
    if (res < 0)
        return res;
    return res == 0;
}
```

Disassembling the if statements gives the following bytecode:

```
--------------- if False --------------------------
      9 LOAD_CONST               0 (None)
     12 RETURN_VALUE
None
--------------- if not result ---------------------
      0 LOAD_FAST                0 (result)
      3 POP_JUMP_IF_TRUE         9
      6 JUMP_FORWARD             0 (to 9)
>>    9 LOAD_CONST               0 (None)
     12 RETURN_VALUE
None
--------------- if result is False ----------------
      0 LOAD_FAST                0 (result)
      3 LOAD_GLOBAL              0 (False)
      6 COMPARE_OP               8 (is)
      9 POP_JUMP_IF_FALSE       15
     12 JUMP_FORWARD             0 (to 15)
>>   15 LOAD_CONST               0 (None)
     18 RETURN_VALUE
None
--------------- if result is not True -------------
      0 LOAD_FAST                0 (result)
      3 LOAD_GLOBAL              0 (True)
      6 COMPARE_OP               9 (is not)
      9 POP_JUMP_IF_FALSE       15
     12 JUMP_FORWARD             0 (to 15)
>>   15 LOAD_CONST               0 (None)
     18 RETURN_VALUE
None
--------------- if result != True -----------------
      0 LOAD_FAST                0 (result)
      3 LOAD_GLOBAL              0 (True)
      6 COMPARE_OP               3 (!=)
      9 POP_JUMP_IF_FALSE       15
     12 JUMP_FORWARD             0 (to 15)
>>   15 LOAD_CONST               0 (None)
     18 RETURN_VALUE
None
--------------- if result == False ----------------
      0 LOAD_FAST                0 (result)
      3 LOAD_GLOBAL              0 (False)
      6 COMPARE_OP               2 (==)
      9 POP_JUMP_IF_FALSE       15
     12 JUMP_FORWARD             0 (to 15)
>>   15 LOAD_CONST               0 (None)
     18 RETURN_VALUE
None
----------------------------------------------------
```

The last 3 instructions are the same (that is the implicit return None of the function). LOAD_GLOBAL is to "read" the True or False boolean constants and LOAD_FAST is to "read" the function parameter in this example. All of them _JUMP_ outside the if statement and the only difference is which comparison operator is executed (if any in the case of not).

UPDATE 1:

As I was publishing this blog post I read the following comments from Ammar Askar who also gave me a few pointers on IRC:

"Note that this code path also has a direct inlined check for booleans, which should help too: https://t.co/YJ0az3q3qu" — Ammar Askar (@ammar2) December 6, 2019

So go ahead and take a look at case TARGET(POP_JUMP_IF_TRUE).

UPDATE 2:

After the above comments from Ammar Askar on Twitter and from Kevin Kofler below I decided to try and change one of the expressions a bit:

```python
t = timeit.Timer(
"""
result = not result
if result:
    pass
""",
"""
import os
result = os.path.isdir('/tmp')
""")
```

that is, calculate the not operation, assign to a variable and then evaluate the conditional statement in an attempt to bypass the built-in compiler optimization. The disassembled code looks like this:

```
  0 LOAD_FAST                0 (result)
  2 UNARY_NOT
  4 STORE_FAST               0 (result)
  6 LOAD_FAST                0 (result)
  8 POP_JUMP_IF_FALSE       10
 10 LOAD_CONST               0 (None)
 12 RETURN_VALUE
None
```

The execution time was around 0.022 which is between is and ==. However the not result operation itself (without assignment) appears to execute for 0.017, which still makes the not operator faster than the is operator, but only just! As already pointed out this is a fairly complex topic and it is evident that not everything can be compared directly in the same context (expression).
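For reference, listings like the ones above can be produced with the standard dis module. This is a minimal sketch, not the script used for this post; the check_* function names are invented for illustration and the exact opcodes and offsets will differ between Python versions:

```python
# Hypothetical helper: wrap each variant in a small function and disassemble it.
# Output depends on the Python version, so it will not match the listings above exactly.
import dis


def check_not(result):
    if not result:
        pass


def check_is_false(result):
    if result is False:
        pass


def check_eq_false(result):
    if result == False:
        pass


for func in (check_not, check_is_false, check_eq_false):
    print("---------------", func.__name__, "---------------")
    dis.dis(func)
```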
P.S.

When I teach Python I try to explain what is going on under the hood. Sometimes I draw squares on the whiteboard to represent various cells in memory and visualize things. One of my students asked me how do I know all of this? The essentials (for any programming language) are always documented in its official documentation. The rest is hacking around in its source code and learning how it works. This is also what I expect people working with/for me to be doing!

See you soon and Happy learning!

Posted by Alexander Todorov on Fri 06 December 2019. There are comments.

How to start solving problems in the QA profession

3 months ago Adriana and I hosted a discussion panel at the QA: Challenge Accepted conference together with Aleksandar Karamfilov (Pragmatic), Gjore Zaharchev (Seavus, Macedonia) and Svetoslav Tsenov (Progress Telerik). The recording is available below in mixed Bulgarian and English languages.

The idea for this was born at the end of the previous year, mainly because I was disappointed by what I was seeing in the local (and a bit of the European) QA communities. In this interview Evgeni Kostadinov (Athlon) says:

"I would advise everyone who is now starting into Quality Assurance to display mastership at work."

This is something that we value very strongly in the open source world. For example in Kiwi TCMS we've built a team of people who contribute on a regular basis, without much material reward, constantly improve their skills, show progress and I (as the project leader) am generally happy with their work. OTOH I do lots of in-house training at companies, mostly teaching programming to testers (Python & Ruby). Over the last 2 years I've had 30% of people who do fine, 30% of people who drop out somewhere in the middle and 30% of people who fail very early in the process. That is a 60% failure rate on entry level material and exercises!

All of this goes to show that there is a big disparity between professional testing and the open source world I live in. And I want to start tackling the problems because I want the testers in our communities to really become professional in their field so that we can work on lots more interesting things in the future. Some of the problems that I see are:

- Lack of personal motivation - many people seem comfortable at entry level positions and when faced with the challenge to learn or do something new they fail big time
- Using the wrong titles/job positions in the wrong context - calling QA somebody who's clearly a tester or calling Senior somebody who barely started their career. All of that leads to confusion across the board
- Lack of technical skills, particularly when it comes to programming - how would you expect to do software testing if you have no idea how that software is built?!? How are you going to take advantage of new tools and techniques when most of them are based around automation and source code?!?

Motivation

I am a strong believer that personal motivation is key to everything. However this is also one of my weakest points. I don't know how to motivate others because I never felt the need for someone else to motivate me. I don't understand why there could be people who are seemingly satisfied with a very low hanging fruit when there are so many opportunities waiting for them. Maybe part of my reasoning is because of my open source background where DIY is king, where "Talk is cheap. Show me the code." is all that matters.

The discussion starts with Svetoslav, who doesn't have a technical education/background.
He's changed profession later in life and inrecent years has been speaking at some events about testing they do in theNativeScript team.Svetoslav: He realized that he needs to make a change in his life,invested lots in studying (not just 3 months) all the while traveling between his home townand Sofia by car and train and still keeping his old job to be able to pay the bills.He sees the profession not as a lesser field compared to development but as equal.That is he views himself as an engineer specializing in testing.Aleksandar: There are no objective reasons for some people to be doing very goodin our field while others fail spectacularly. This coming from the owner of one of thebiggest QA academies in the country. A trend he outlines is the folks who come forknowledge and put their effort into it and the ones who are motivated by the relativelyhigh salary rates in the industry. In his opinion current practitioners should notbe giving false impression that the profession is easy because there are equally harditems as in any other engineering field. Wrong impression about how hard/easy it isto achieve the desired monetary reward is something that often leads to failure.Gjore: Coming from his teaching background at the University of Niš he says peoplegenerally have the false impression they will learn everything by just attendinglectures/training courses and not putting effort at home. I can back this up 100%judging by performance levels of my corporate students. Junior level folks oftendon't understand how much they need to invest into improving their skills especiallyin the beginning. OTOH job holders often don't want to listen to others because theythink they know it all already. Another field he's been experimenting with is amentoring program.Tester, QA, QE, etc - which is what and why that mattersIMO part of the problem is that we use different words to often describe the same thing.Companies, HR, employees and even I are guilty of this. We use various termsinterchangeably while they have subtle but important differences.As a friend of mine told meeven if you write automation all the time if you do it after the fact(e.g. after a bug was reported) then you are not QA/QE - you are a simple tester(with a slightly negative connotation)Aleksandar: terminology has been defined long time ago but the problem comes fromjob offers which use the wrong titles (to make the position sound sexier). Anotherproblem is the fact that Bulgaria (also Macedonia, Serbia and I dare say Romania) arepredominantly outsourcing destinations: your employer really needs testers but fiercecompetition, lack of skilled people (and distorted markets), etc leads to distortionin job definitions. He's blaming companies that they don't listen enough to theiremployees. Note: there's nothing bad in being "just a tester" executing test scenarios and reportingbugs. That was one of the happiest moments in my career. However you need to be aware ofwhere you stand, what is required from you and how you would like to develop in the future.Svetoslav: Doesn't really know all the meaning of all abbreviations and honestlydoesn't really care. His team is essentially a DevOps team with lots of mixed responsibilitywhich necessitates mixed technical and product domain skills. Note that Progress is bycontrast a product company, which is also the field I've always been working in. 
That isto be successful in a product company you do need to be a little bit of everythingat different times so the definition of quality engineer gets stretched and skeweda lot.Gjore: He's mostly blaming middle level management b/c they do not possesall the necessary technical skills and don't understand very well the nature oftechnical work. In outsourcing environment often people get hired just toprovide head count for the customer, not because they are needed. Software testingis relatively new on the Balkans and lots of people still have no ideawhat to do and how to do it. We as engineers are often silent and contribute tothese issues by not raising them when needed. We're also guilty of notfollowing some established processes, for example notattending some required meetings (like feature planning) and by doing sonot helping to improve the overall working process. IOW we're not alwaysprofessional enough.Testers and programming Testers should be code literate. Reading code is a crucial skill for any tester and writing code has so many uses beyond just boilerplate automation. https://t.co/Tts0rzHI4Y — Amber Race (@ambertests) March 24, 2019On one of my latest projects we've burned throughthe following technologies in the span of 1 year: Rust, Haskell, Python, React, all sortsof cloud vendors (pretty much all of them) and Ansible of course. Testing was adjustedas necessary and while hiring we only ask for the person to have adequate codingskills in Python, Bash or any other language. The rest they have to learn accordingly.So what to do about it? My view is that anyone can learn programming but not manypeople do it successfully.Svetoslav: To become an irreplaceable test engineer you need skills. Broad technicalskills are a must and valued very highly. This is a fact, not a myth. Information iseasily accessible so there's really no excuse not to learn. Mix in product and businessdomain knowledge and you are golden.Aleksandar: Everyone looks like they wish to postpone learning something new, especiallyprogramming. Maybe because it looks hard (and it is), maybe because people don't feelcomfortable in the subject, maybe because they haven't had somebody to help themand explain to them critical concepts. OTOH having all of that technical understandingactually makes it easier to test software b/c you know how it is built and how it works.Sometimes the easiest way to explain something is by showing its source code (I do this a lot).Advice to senior folks: don't troll people who have no idea about something they'venever learned before. Instead try to explain it to them, even if they don't want to hear it.This is the only way to help them learn and build skills. In other words: be a goodteam player and help your less fortunate coworkers.Gjore: A must have is to know the basic principles ofobject oriented programmingand I would add also SOLID. With the ever changinglandscape of requirements towards our profession we're either into the process of changeor out of this process.Summary and action itemsThe software testing industry is changing. All kind of requirements are pushing ourprofession outside its comfort zone, often outside of what we signed up for initially.This is a fact necessitated by evolving business needs and competition. 
This is equallytrue for product and outsourcing companies (which work for product companies after all).This is equally true for start-ups, SME and big enterprises.Image from No Country for Old QA, Emanuil Slavov (Komfo)What can we do about it ?Svetoslav: Invest in building an awesome (technical) team. Make it a challenge tolearn and help your team mates to learn with you. However be frank with yourself and withthem. Ask for help if you don't know something. Don't be afraid to help other peoplelevel-up because this will ultimately lead to you leveling-up.Aleksandar: Industry should start investing in improving workers qualification levelbecause Bulgaria is becoming an expensive destination. We're on-par with some companiesin western Europe and USA (coming from a person who also sells the testing service).Without raising skills level we're not going to have anything competitive to offer.Also pay attention to building an inclusive culture especially towards people on thelowest level in terms of skills, job position, responsibilities, etc.Gjore: Be the change, drive the change, otherwise it is not going to happen!So here are my tips and tricks the way I understand them:Find your motivation and make sure it is the "correct" one - there's nothing wrong in wanting a higher salary but make sure you are clear that you are trading in your time and knowledge for that. Knowing what's in it for you will help you self motivate and pull yourself through hard timesFind a mentor if possible - I've never had one so I can't offer much advise hereSoftware testing is hard, no kidding. Some researchers claim it is even harder than software development because the field of testing encompasses the entire field of developmentOnce you understand the concepts and how things work it becomes easy. We do have very fast rate of technology change but most of the things are not fundamental paradigm change. Building on this basic knowledge makes things easier (or to put it mildly: everything has been invented by IBM in the 1970s)You will not learn everything (not even close) in a short course. I've spent 5 years in engineering university learning how software and hardware works. I've been programming for the past 20 years every single day. This makes it easier but there are lots of things I have not idea about. 30-60 minutes of targeted learning and applying what you learn goes a long way over the course of many yearsInvest in yourself, nobody is going to do it for you. If you look at github.com/atodorov you will notice that everything is green. If you drill down by year you will find this is the case for the past 3-4 years only. The 10 years before that I've spent building up to this moment. It is only now that I get to reap some of the benefits of doing so (like a random Silicon Valley startup telling me they are fans of my work or being invited as a speaker at events)Programming is hard, when you don't know the basic concepts and when you lack the framework to think about abstractions (loops, conditionals, etc). When you learn all of this it becomes harder because you need to learn different languages and frameworks. However it is not impossible. There are lots of free materials available online, now more than everThink about your "position" in the team/company. What do you do, what is required of you, how can you do it better ? Call things with their real names and explain to your coworkers which is what. This will bring more consistency in the entire communityLots of these items sound cliche but they are true. 
There's nothing stopping you frombecoming the best QA engineer in the world but you.To be continuedThis first discussion was born out of necessity and is barely scratching the surface.The format is not ideal. We didn't present multiple points of view.We didn't have time to prepare for it to be honest!Gjore and I made a promise to continue the discussion bringing it to Macedonia and Serbia.I am hoping we can also bring other neighboring countries like Romania and Greece on boardand learn from mutual experience.See you soon and Happy testing! Posted by Alexander Todorov on Mon 29 July 2019 There are comments. Contributing to Open Source with Docker, Inc The rumors have finally been confirmed. Docker, Inc. is opening their newR&D center in Sofia. At an event last night, they stated their intentions to do a fair amountof product development in Sofia as well as contribute to the local society/communitytoo (if I got this correctly). This is very good news for the local eco-system socongrats for that from my side!This blog post outlines my impressions from the event and a few related more generalthoughts.How did Docker came to Sofia?I don't know the details but their top team in Bulgaria seems to be comingdirectly from VMware. So were other engineers present at the event who arebased elsewhere. When you think about it this is not surprising at all.(FTR VMware is also directly responsible for having Uber engineering in Sofia).VMware is one of the few companies in Bulgaria that does real product developmentand R&D (credit where credit is due).There's even a smaller number of companies developing infrastructureproducts, e.g. the same things I test on behalf of Red Hat. The majority of theother companies are either outsourcing or focused on products in upper layersof the stack!Contributing to Open Source according to Docker (and myself)This BoF session was lead by Andrew Hsu and Sebastiaan van Stijn.The group was predominantly inexperienced in terms of OSS contribution butmotivated to try/find a project where they can contribute. From what I couldtell they were relatively experienced software engineers.On my question "what are they planning for the local community in Sofia?" theimmediate answer was meet-ups and presentations which is expected. This is howyou start and try to establish the level of experience of the local groupsand their level of interest in what you are doing.I prompted a bit further about workshops or hackathons and they told methey've had a hacking even in Paris but didn't elaborate much further. Maybeit is too early for them to be able to give more detailed answers.Let's hope we'll see more practical events.Andrew did outline the general principles of their community (aka don't be a jerk),pointed out the various communication channels they have (rip IRC), the fact thatinternally the company uses GitHub and encourages cross-team participation viathe pull request workflow.This is what my friends at Bitergia call "inner source"and is a good thing!A few of the participants asked how and where to start and all I kept hearing was"follow pull requests on GitHub", "do you want to see some source code"! This issomething I take issue with so let me explain.While that has been the historical model for doing open source, aka dig straightinto the problem, and also how I started and still do open source I think it doesn'twork in the modern world. 
What I've seen from my students and folks which I've trainedis that they have far too many opportunities to be bothered to dig deep into somethingwhich seemingly puts roadblocks on every step of the way. Especially when you area new comer. I myself experience this regularly and often get frustrated bycommunities who make it damn nearly impossible to land a code change. My only motivationhere is that I depend on that component being fixed and there is no work-around.To be fair to Docker our Fedora or Red Hat communities aren't much better in this regard.In most of the projects I've contributed they kind of expect you to be motivatedenough and be able to figure out both process and technical details mostly on your own!Maybe it is the nature of working on platform and infrastructure. You do need a fair bitof general knowledge and specific system knowledge to work on such projects.My personal experience leading Kiwi TCMS has been that mostcontributors need a long time to settle in and feel comfortable in the project andthat they do need a fair amount of hand holding.First and fore-most many contributors don't know the underlying technology well enough.For complex software there's also the whole issue of computer science 101, operating systems,how the kernel or virtualization engine works, etc. Then you need to know the architectureof the software you want to fix, the libraries and frameworks it uses - this helps youquickly navigate to the place which needs a patch. Honestly this takes years to masterand to develop a gut feeling about it. On the outside it may look easy because activecontributors have had many years of experience acquiring this knowledge.Then we have the "process" part. How do I open or rebase a pull request. How to amendcommits, etc. This is something I learned the hard way but I've shown it to otherpeople and they were able to advance much more quickly. Also things like how do you communicatewith others in the community, how do you "push" for some types of changes, etc.Dedicated mentors will help a lots here, but that also means dedicated contributors.We do provide a detailed technical trainingand on-boarding program and mentoring for Kiwi TCMS and still there are more people who give it a tryand drop out compared to those who stay with the team.We still expect commitment and finishing the tasks one set out to complete though.My initial impression (from Docker) for the moment is very guarded and mostly critical.I feel like they are interested in finding folks to contribute to their own repositoriesand then hire them (that is expected) but I don't feel like they care much about whathappens outside their own projects. I hope I am wrong and we do see engineers (regardlessof who employs them) contributing all over the place on a regular basis.What is the problem ?The problem for the local eco-system (and it is a world wide problem)is that there are many companies coming in but there is a very limited pool of talent.Especially in less popular fields like research, operating systems and low level infrastructure.That takes many years to develop in house and to reach critical mass for a thrivingcommunity. I don't feel we are there yet!In a later blog post I will describe the history of ScyllaDB whichis the measure of success I would like to see in Bulgaria.The problem I see for the open source community (in the country) is that nobody is reallyworking on developing that. 
There are small efforts by individuals or a few companies butthe mechanics of open source and the culture of free sharing of knowledge is somethingI don't see yet. I fail to see a program, like Google Summer of Code perhaps, wheredevelopers are encouraged and supported to contribute just for the sake of contributing.Also I fail to see a structure which will help new contributors andyoung developers set out on a path of meaningful contributions early in their careerand by doing this improve their skills and personal brand which ties in with the firstparagraph.These are some things I have observed and some gut feelings from someone who's been doingopen source for 15+ years. I can't pin point exact reason why this is happening.I don't have a recipe how to fix it!I do however keep in touch with like minded folks from several other companies and we'vediscussed these topics occasionally. We do have some ideas but lack critical mass,shared goal and self-organization. Posted by Alexander Todorov on Fri 31 May 2019 There are comments. The Art of [Unit] Testing A month ago I held a private discussional workshop for a friend's company in Sofia.With people at executive positions on the tech & business side we discussedsome of their current problems with respect to delivering a quality product.Additionally I had a list of pre-compiled questions from members of the technical team,young developers, mostly without formal background in software testing!Some of the answers were inspired byThe Art of Unit Testing by Roy Osherove hence the title!QuestionsTypes of testing, general classificationThere aremany types of testing!Unit, Integration, System, Performance and Load, Mutation, Security, etc. Betweendifferent projects we may use the same term to refer to slightly different typesof testing.For example in Kiwi TCMS we generally test with a database deployed,hit the application through its views (backend points that serve HTTP requests) and asserton the response of these functions. The entire request-response cycle goes through theapplication together with all of its settings and add-ons! In this project we aremore likely to classify this type of testing as Integration testing although at timesit is more closer to System testing.The reason I think Kiwi TCMS is more closer to integration testing is because we executethe tests against a running development version of the application! The test runner processand the SUT process are in the same memory space (different threads sometimes).In contrast full system testing for Kiwi TCMS will mean building and deploying the dockercontainer (a docker compose actually), hitting the application through the layerexposed by Docker and asserting on the results. Here test runner and SUT are two distinctlyseparate processes. Here we also have email integration, GitHub and Bugzilla integration,additional 3rd party libraries that are installed in the Docker imaga, e.g. kerberosauthentication.In another example forpelican-ab we mostly have unittests which show the SUT as working. However pelican-ab for a static HTML generatorand if failed miserably with DELETE_OUTPUT_DIRECTORY=True setting! The problem here is thatDELETE_OUTPUT_DIRECTORY doesn't control anything in the SUT but does controlbehavior in the outer software! 
This can only be detected with integration tests,where we perform testing of all integrated modules to verify the combined functionality,see here.As we don't depend on other services like a database I will classify this as pure integrationtesting b/c we are testing a plugin + specific configuration of the larger system which enforces moreconstraints.My best advice is to:1) have a general understanding of what the different terms mean in the industry2) have a consensus within your team what do you mean when you say X type of testing and Y type of testing so that all of you speak the same language3) try to speak a language which is closest to what the rest of the industry does, baring in mind that we people abuse and misuse language all the time!What is unit testingThe classical definition isA unit test is a piece of code (usually a method) that invokes another piece of codeand checks the correctness of some assumptions afterwards. If the assumptions turn outto be wrong the unit test has failed.A unit is a method or function.Notice the emphasis above: a unit is method or a function - we exercise these in unit tests.We should be examining their results or in a worse case the state of the class/modulewhich contains these methods! Now also notice that this definition is different from theone available in the link above. For reference it is42) Unit TestingTesting of an individual software component or module is termed as Unit Testing.Component can be a single class which comes close to the definition for unit testing butit can be several different classes, e.g. an authentication component handling several differentscenarios. Modules in the sense of modules in a programming language almost always containmultiple classes and methods! Thus we unit test the classes and methods but we can rarelyspeak about unit testing the module itself.OTOH the second definition gets the following correctly:It is typically done by the programmer and not by testers, as it requires a detailedknowledge of the internal program design and code.In my world, where everything is open source we testers can learn how the SUT and itsclasses and methods work and we can also write pure unit tests. For example incodec-rpmI had the pleasure to add very pure unit tests - call a function and assert on its result,nothing else in the system state changed (that's how the software was designed to work)!Important:Next questions ask about how to ... unit test ... and the term "unit test" in them isused wrongly! I will drop this and only use "test" to answer!Also important - make the difference between unit type test and another type oftest written with a unit testing framework! In most popular programming languages unittesting frameworks are very powerful! They can automatically discover your test suite (discovery),execute it (test runner), provide tooling for asserting conditions (equal, not equal, True,has changed, etc) and tooling for reporting on the results (console log, HTML, etc).For example Kiwi TCMS is a Django application and it uses the standard test frameworkfrom Django which derives from Python's unittest! A tester can use pretty much any kindof testing framework to automate pretty much any kind of test! Some frameworks just makeparticular types of tests easier to implement than others.How to write our tests without touching the DB when almost all business logic iscontained within Active Record objects? 
Do we have to move this logic outside Active Record,in pure PHP classes that don't touch DB?To answer the second part - it doesn't really matter. Separating logic from database isa nicer design in general (loosely coupled) but not always feasible. Wrt testing you can eithermock calls to the database or perform your tests with the DB present.For example Kiwi TCMS is a DB heavy applcation. Everything comes and goes to thedatabase, it hardly has any stand-alone logic. Thus the most natural way to test is togetherwith the database! Our framework provides tooling to load previously prepared test data(db migrations, fixtures) and we also use factoryboy to speed up creation of ORM objectsonly with the specific attributes that we need for the test!Key here is speed and ease of development, not what is the best way in theory! In real-lifetesting there are hardly any best practices IMO. Testing is always very context dependent.Is it good to test methods with Eloquent ORM/SQL statements and how to do it without a database?Eloquent is the ORM layer for Laravel thus the questionbecomes the same as the previous one.! When the application is dependent on the DB, which in theircase is, then it makes sense to use a database during testing!For Feature tests isn't it better to to test them without a DB and b/c we have more businesslogic there. For them we must be certain that we call the correct methods?Again, same as the previous one. Use the database when you have to! And two questions:1) Does the database messes your testing up in some way? Does it prevent you from doing something? If yes, just debug the hell out of it, figure out what happens and then figure out how to fix it2) What on Earth is we must be certain that we call the correct methods mean? (I am writing this as personal notes before the actual session took place). I suspect that this is the more general am I testing for the right thing question which inexperienced engineers ask. My rule of thumb is: check what do you assert on. Are you asserting that the record was created in the DB (so verifying explicitly permissions, DB setup, ORM correctness) or that the result of the operation mathes what the business logic expects (so verifying explicitly the expected behavior and implicitly that all the layers below managed to work so the data was actually written to disk)? At times both may be necessary (e.g. large system, lots of cachine, eventual consistency) but more often than not we need to actually assert on the business logic.Example:technical validation: user tries to register an account, assert email was sent orbusiness/behavior validation: user tries to register an account, after confirming their intent they are able to loginOptimization for faster execution time, parallel executionParallel testing is no, no, no in my book! If you do not understand why something is slowtrowing more instances at it increases your complexity and decreases the things you dounderstand and subsequently are able to control and modify!Check-out this excellent presentation byEmanuil Slavov atGTAC 2016. The most important thing Emanuil says is that a fast test suite is the result of manyconscious actions which introduced small improvements over time. 
His team had assignedthemselves the task to iteratively improve their test suite performance and at every stepof the way they analyzed the existing bottlenecks and experimented with possible solutions.The steps in particular are (on a single machine):Execute tests in dedicated environment;Start with empty database, not used by anything else; This also leads toadjustments in your test suite architecture and DB setup procedures;Simulate and stub external dependencies like 3rd party services;Move to containers but beware of slow disk I/O;Run database in memory not on disk because it is a temporary DB anyway;Don't clean test data, just trash the entire DB once you're done; Will also require adjustments to tests, e.g. assert the actual object is there, not that there are now 2 objects;Execute tests in parallel which should be the last thing to do!Equalize workload between parallel threads for optimal performance;Upgrade the hardware (RAM, CPU) aka vertical scaling; I would move this before parallel execution b/c test systems usually have less resources;Add horizontal scaling (probably with a messaging layer);There are other more heuristical approaches like not running certain tests oncertain branches and/or using historical data to predict what and where to execute.If you want to be fancy couple this with an ML algorithm but beware thatthere are only so many companies in the world that will have any real benefit from this.You and I probably won't. Read more about GTAC 2016.Testing when touching the file system or working with 3rd party cloud providersIf touching the filesystem is occasional and doesn't slow you down ignore it!But also make sure you do have a fast disk, this is also true for DB access.Try to push everything to memory, e.g. large DB buffers, filesystem mounted in memory,all of this is very easy in Linux. Presumption here is that these are temporary objectsand you will destroy them after testing.Now if the actual behavior that you want to test is working with a filesystem (e.g.producing files on disk) or uploading files to a cloud provider there isn't much youcan do about it! This is a system type of test where you rely on integration witha 3rd party solution.For example for django-s3-cacheyou need to provide your Amazon S3 authentication tokens before you can executethe test suite. It will comminicate back and forth with AWS and possibly leave someartifacts there when it is done!Same thing for lorax, where the essenceof the SUT is to build Linux images ready to be deployed in the cloud! Checkout thePR above and click the View details button at the bottom right to see the varioustest statuses for this PR:Travis CI - pylint + unit test + some integration type tests (cli talks to API server)very basic sanity tests (invoking the application cli via bash scripts). This hits the network to refresh with RPM package data from Fedora/CentOS repositories.Jenkins jobs for AWS, Azure, OpenStack, Vmware, other (tar, Docker, stand-alone KVM). These will run the SUT, get busy for about 10 minutes to compose a cloud image of the chosen format, extract the file to a local directory, upload to the chosen cloud vendor, spin up a VM there and wait for it to initialize, ssh to the VM and perform final assertions, e.g. validating it was able to boot as we expected it to. This is for x86_64 and we need it for Power, s390x and ARM as well! I am having troubles even finding vendors that support all of these environments! 
Future releases will support even more cloud environments so rinse and repeat!My point is when your core functionality depends on a 3rd party provider your testing willdepend on that as well. In the above example I've had the scenario where VMs in Azure weretaking around 1hr to boot up. At the time we didn't know if that was due to us not integratingwith Azure properly (they don't use cloud-init/NetworkManager but their own code which wehad to install and configure inside the resulting Linux image) or because of infrastructureissues. It turned out Azure was having networking trouble at the time when our teamwas performing final testing before an important milestone. Sigh!With what tests (Feature or Unit) should I start before refactoring?So you know you are going to refactor something but it doesn't have [enough] tests?How do you start? The answer will ellude most developers. You do not start by definingthe types of testing you should implement. You start with analyzing the existing behavior:how it works, what conditions it expects, what input data, what constraints, etc. This isvery close to black-box testing techniques like decision tables, equivalence partitioning, etcwith the added bonus that you have access to the source code and can more accuratelyfigure out what is the actual behavior.Then you write test scenarios (Given-When-Then or Actions 1, 2, 3 + expected results).You evaluate these scenarios if they encompass all the previously identified behaviorand classify the risk assiciated with them. What if Scenario X fails after refactoring?Cloud be the code is wrong, could be the scenario is incomplete. How does that affectschedule, user experience, business risk (often money), etc.Above is tipically the job of a true tester as illustrated by this picture fromIngo Philipp, full presentationhereThen and only then you sit down and figure out what types of tests are needed toautomate the identified scenarios, implement them and start refactoring.What are inexperienced developers missing most often when writing tests?How to make my life easier if I am inexperienced and just starting with testing?See the picture above! Developers, even experienced ones have a different mind setwhen they are working on fixing code or adding new features. What I've seen most oftenly isadding tests only for happy paths/positive scenarios and not spending enough time toevaluate and exercise all of the edge cases.True 100% test coverage is impossible in practice and there are so many things that cango wrong. Developers are typically not aware of all that because it is tipically not theirjob to do it.Also testing and development require different frame of mind. I myself am a tester but I dohave formal education in software engineering and regularly contribute as developer to variousprojects (2000+ GitHub conributions as of late). When I revisit some tests I've writtenI often find they are pointless and incorrect. This is because at the time I've beenthinking "how to make it work", not "how to test it and validate it actually works".For an engineer without lots of experience in testing I would recommend to always startwith a BDD exercise. The reason is it will put you in a frame of mind to think aboutexpected behavior from the SUT and not think about implementation. This is the basisfor asking questions and defining good scenarios. 
Automation testing is a means ofexpression, not a tool to find a solution to the testing problem!Check-out this BDD experiment I didand also the resourceshere.Inside-out(Classi approach) vs Outside-in(Mockist approach)? When and why?These are terms associated with test driven development (TDD). A quick search revealsan excellent article explaining this question.Inside Out TDD allows the developer to focus on one thing at a time.Each entity (i.e. an individual module or single class) is created until the wholeapplication is built up. In one sense the individual entities could be deemedworthless until they are working together, and wiring the system together at alate stage may constitute higher risk. On the other hand, focussing on one entity at a timehelps parallelise development work within a team.This sounds to me is more suitable for less experienced teams but does require a strongsenior personel to control the deliverables and steer work in the right direction.Outside In TDD lends itself well to having a definable route through the system from thevery start, even if some parts are initially hardcoded.The tests are based upon user-requested scenarios, and entities are wired together fromthe beginning. This allows a fluent API to emerge and integration is proved from the start of development.By focussing on a complete flow through the system from the start, knowledge of how differentparts of the system interact with each other is required. As entities emerge,they are mocked or stubbed out, which allows their detail to be deferred until later.This approach means the developer needs to know how to test interactions up front, either througha mocking framework or by writing their own test doubles. The developer will then loop back,providing the real implementation of the mocked or stubbed entities through new unit tests.I've seen this in practice in welder-web. This is theweb UI for the above mentioned cloud image builder. The application was developed iterativelyover the past 2 years and initially many of the screens and widgets were hard-coded.Some of the interactions were not even existing, you click on a button and it does nothing.This is more of an MVP, start-up approach, very convenient for frequent product demoswhere you can demonstrate that some part of the system is now working and it showsreal data!However this requires a relatively experienced team both testers and developersand relatively well defined product vision. Individual steps (screens, interactions, components)may not be so well defined but everybody needs to know where the product should goso we can adjust our work and snap together.As everything in testing the real answer is it depends and is often a mixture of the two.What is the difference between a double, stub, mock, fake and spy?These are classic unit testing terms defined by Gerard Meszaros in his bookxUnit Test Patterns, more precisely inTest Double Patterns.These terms are somewhat confusing and also used interchangeably in testing frameworksso see below.Background:In most real-life software we have dependencies:on other libraries, on filesystems, on database, on external API, on another class(private and protected methods), etc.Pure unit testing (see definition at the top) is not concerned with these because wecan't control them. Anytime we cross outside the class under test(where the method which is unit tested is defined) we have a dependency thatwe need to deal with. This may also apply to integration type tests, e.g. 
I don't want to hit GitHub every time I want to test that my code will not crash when we receive a response from them.

From xUnit Test Patterns:

For testing purposes we replace the real dependent component (DOC) with our Test Double. Depending on the kind of test we are executing, we may hard-code the behavior of the Test Double or we may configure it during the setup phase. When the SUT interacts with the Test Double, it won't be aware that it isn't talking to the real McCoy, but we will have achieved our goal of making impossible tests possible.

Example: testing a discount algorithm

- Replace the method figuring out what kind of discount the customer is eligible for with a hard-coded test double, e.g. -30%, and validate the final price matches!
- In another scenario use a second test double which applies a 10% discount when you submit a coupon code. Verify the final price matches expectations!

Here we don't care how the actual discount percentage is determined. This is a dependency. We want to test that the discount is actually applied properly, e.g. there may be 2 or 3 different discounts and only 1 applies, or no discount policy for items that are already on sale. This is what you are testing. Important: when the applying algorithm is tightly coupled with parts of the system that select what types of discounts are available to the customer, that means your code needs refactoring since you will not be able to create a test double (or it will be very hard to do so).

A Fake Object is a kind of Test Double that is similar to a Test Stub in many ways, including the need to install into the SUT a substitutable dependency, but while a Test Stub acts as a control point to inject indirect inputs into the SUT the Fake Object does not. It merely provides a way for the interactions to occur in a self-consistent manner. Variations (see here):

- Fake database;
- In-memory database;
- Fake web service (or fake web server in the case of Django);
- Fake service layer;

Use of a Test Spy is a simple and intuitive way to implement an observation point that exposes the indirect outputs of the SUT so they can be verified. Before we exercise the SUT, we install a Test Spy as a stand-in for a depended-on component (DOC) used by the SUT. The Test Spy is designed to act as an observation point by recording the method calls made to it by the SUT as it is exercised. During the result verification phase, the test compares the actual values passed to the Test Spy by the SUT with the expected values. Note: a test spy can be implemented via a test double exposing some of its functionality to the test framework, e.g. exposing internal log messages so we can validate them, or it can be a very complex mock type of object.

From The Art of Unit Testing:

A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency itself.

A mock object is a fake object in the system that decides whether the unit test has passed or failed. It does so by verifying whether the object under test (e.g. a method) interacted as expected with the fake object.

Stubs can NEVER fail a test! The asserts are always against the class/method under test. Mocks can fail a test!
We can assert how the class/method under test interacted with the mock.

Example: when testing a registration form, which will send a confirmation email:
Checking that invalid input is not accepted - will not trigger send_mail() so we usually don't care about the dependency;
Checking that valid input will create a new account in the DB - we stub out send_mail() because we don't want to generate unnecessary email traffic to the outside world;
Checking if a banned email address/domain can register - we mock send_mail() so that we can assert that it was never called (together with other assertions that a correct error message was shown and no record was created in the database);
Checking that a valid, non-banned email address can register - we mock send_mail() and later assert it was called with the actual address in question. This will verify that the system will attempt to deliver a confirmation email to the new user!

To summarize:
- When using mocks, stubs and fake objects we should be replacing external dependencies of the software under test, not internal methods of the SUT!
- Beware that many modern test frameworks use the singular term/class name Mock to refer to all of the things above. Depending on their behavior they can be true mocks or pure stubs.

More practical examples with code:
Mocking Django AUTH_PROFILE_MODULE without a Database
Bad Stub Design in DNF
Bad Stub Design in DNF, Pt.2
Beware of Double Stubs in RSpec

How do we test statistics where you have to create lots of records in different states to make sure the aggregation algorithms work properly?

Well, there isn't much to do around this - create all the records and validate your queries! Here the functionality is mostly filtering records from the database, grouping and aggregating them and displaying the results in table or chart form. Depending on the complexity of what is displayed I may even go without actually automating this. If we have a representative set of test data (e.g. all possible states and values) then just make sure the generated charts and tables show the expected information.

In automation the only scenario I can think of is to re-implement the statistics algorithm again! Doing a select() && stats() and asserting stats(test_data) == stats() doesn't make a lot of sense because we're using the result of one method to validate itself! It will help discover problems with select() but not with the actual calculation! Once you reimplement every stat twice you will see why I tend to go for manual testing here.

How to test various filters and searches which need lots of data?

First ask yourself the question - what do you need to test for?
That all values from the webUI are passed down to the ORM;
That the ORM will actually return the records in question (e.g. active really means active, not the opposite);
Which columns will be displayed (which is a UI thing).

For Kiwi TCMS search pages we don't do any kind of automated testing! These are very static HTML forms that pass their values to a JavaScript function which passes them to an API call and then renders the results! When you change it you have to validate it manually but nothing more really. It is good to define test scenarios, especially based on customer bug reports, but essentially you are checking that a number of values are passed around, which either works or it doesn't. Not much logic and behavior to be tested there! Think like a tester, not like a developer!

How to test an API? Should we use an API spec schema and assert the server side and client side based on it?

This is generally a good idea.
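One cheap way to express such a contract is a schema that responses are validated against. A sketch using the jsonschema library as one possible tool - the endpoint and schema below are made up, not a real specification:

import requests
from jsonschema import validate  # assumption: pip install jsonschema

USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email", "is_active"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
        "is_active": {"type": "boolean"},
    },
}


def test_user_endpoint_respects_the_contract():
    response = requests.get("https://api.example.com/v1/users/1")

    assert response.status_code == 200
    # raises jsonschema.exceptions.ValidationError if the server changed the contract
    validate(instance=response.json(), schema=USER_SCHEMA)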
The biggest trouble with APIs is that they change without warning, sometimes in an incompatible way, and clients are not aware of this. A few things you can do:

Use API versioning and leave older versions around for as long as necessary. Facebook for example keeps their older API versions around for several years.
Use some sort of contract testing/API specification to validate behavior. I find value here in having a test suite which explicitly exercises the external API in the desired ways (full coverage of what the application uses) so it can detect when something breaks. If this is not 100% all the time it will become useless very quickly.
Record and replay may be useful at scale. Twitter uses a similar approach, anonymizing the actual values being sent around and also accounting for parameter types, e.g. an int X can receive only ints and if someone tries to send a string that was probably an error. Twitter however has access to their entire production data and can perform that kind of sampling.

What types of tests do QA people write? (I split this from the next question).

As should be evident from my many examples nobody stops us from writing any kind of test in any kind of programming language. This only depends on personal skills and the specifics of the project we work on.

Please refer back to the codec-rpm, lorax and welder-web projects. These are components from a larger product named Composer which builds Linux cloud images. welder-web is the front-end which integrates with Cockpit. This is written with React.js and includes some component type tests (I think close to unit tests but I haven't worked on them) and an end-to-end test suite (again JavaScript) similar to what you do with Selenium - fire up the browser and click on widgets.

lorax is a Python based backend with unit and integration tests in Python. I mostly work on testing the resulting cloud images, which uses a test framework for Bash scripts, ansible, Docker and a bunch of vendor specific cli/api tools.

codec-rpm is a smaller component from another backend called BDCS which is written in Haskell. As I showed you I've done some unit tests (and even bug fixes) and for bdcs-cli I did work on similar cloud image tests in bash script. This component is now frozen but when/if it picks up all the existing bash scripts will need to be ported plus any unit tests which are missing will have to be reimplemented in Haskell. Whoever on the team is free will get to do it. At the very beginning we used to have a 3rd backend written in Rust but that was abandoned relatively quickly.

To top this off a good QE person will often work on test related tooling to support their team. I personally have worked on Cosmic-Ray - a mutation testing tool for Python used by Amazon and others, I am the current maintainer of pylint-django - essentially a developer tool but I like to stretch its usage with customized plugins - and of course Kiwi TCMS which is a test management tool.

How do they (testers) know what classes I am going to create so they are able to write tests for them beforehand?

This comes from test driven development practices. In TDD (as everywhere in testing) you will start with an analysis of what components are needed and how they will work. Imagine that I want you to implement a class that represents a cash-desk which can take money and store it, count it, etc. Imagine this is part of a banking application where you can open accounts, transfer money between them, etc. With TDD I start by implementing tests for the desired behavior.
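A first test in that style might look like the sketch below - the module name solution and the Bill and CashDesk classes are the interface I am asking for, not code that already exists, and the take_money()/total() methods are invented for illustration:

import unittest

import solution  # the module the developer is expected to deliver


class TestCashDesk(unittest.TestCase):
    def test_taking_money_updates_the_total(self):
        desk = solution.CashDesk()

        desk.take_money(solution.Bill(5))
        desk.take_money(solution.Bill(10))

        self.assertEqual(desk.total(), 15)


if __name__ == "__main__":
    unittest.main()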
As in the sketch above, I will import solution and create an object from the Bill class to represent a 5 BGN note. I don't care how you would like to name your classes - the tests serve to enforce the interface I need you to implement: module name, classes in the module, method names, behavior. Initially in TDD the tests will fail. Once functionality is implemented piece by piece the tests will start passing one by one! In TDD testers don't know what the developers will create - we expect them to do something, otherwise the tests fail and you can't merge! In practice there is a back-and-forth process!

The above scenario is part of my training courses where I give students homework assignments and I have already provided automated test suites for the classes and modules they have to implement. Once the suite reports PASS I know the student has at least done a good enough implementation to meet the bare minimum of requirements. See an example for the Cash-Desk and Bank-Account problems at https://github.com/atodorov/qa-automation-python-selenium-101/tree/master/module04

How to test functionality which is date/time dependent?

For example a certain function should execute on week days but not on the weekend. How do we test this? Very simple, we need to time travel, at least our tests do. Check out php-timecop and this introductory article.

Now that we know what stubs are we simply use a suitable library and stub out the date/time utilities. This essentially gives you the ability to freeze the system clock or time travel backwards and forwards in time so you can execute your tests in the appropriate environment. There are many such time-travel/time-freeze libraries for all popular programming languages.
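In Python, for example, the freezegun library can do this. A small sketch - the function under test is invented for illustration:

import datetime

from freezegun import freeze_time  # assumption: pip install freezegun


def runs_only_on_week_days():
    """Example code under test."""
    return datetime.date.today().weekday() < 5


@freeze_time("2019-04-05")  # a Friday
def test_runs_on_week_days():
    assert runs_only_on_week_days() is True


@freeze_time("2019-04-07")  # a Sunday
def test_does_not_run_on_the_weekend():
    assert runs_only_on_week_days() is False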
Given the two variations of the method below:

public function updateStatusPaid()
{
    $this->update([
        'date_paid' => now(),
        'status' => 'paid'
    ]);
}

public function updateStatusPaid()
{
    $this->date_paid = now();
    $this->status = 'paid';
    $this->save();
}

How do we create a test which validates this method without touching the database? Also we want to be able to switch between method implementations without updating the test code!

Let's examine this in detail. Both of these methods change field values for the $this object and commit that to storage! There is no indication of what happened inside other than the object fields being changed in the underlying storage medium. Options:

1) Mock the save() method or spy on the entire storage layer. This will give you faster execution but more importantly will let you examine the values before they leave the process memory space. Your best bet here is replacing the entire backend portion of the ORM layer which talks to the database. The drawback is that data may not be persistent between test executions/different test methods (depending on how they are executed and how the new storage layer works) so chained tests, which depend on data created by other tests or other parts of the system, may break.

2) Modify your method to provide more information which can be consumed by the tests. This is called engineering for testability. The trouble with this method is that it doesn't expose anything to the outside world so the only way we can check that something has changed is to actually fetch it from storage and assert that it is different.

3) Test with the database included. The OP presumes touching a database during testing is a bad thing. As I've already pointed out this is not necessarily the case. Unless your data is so big that it is spread around cluster nodes in several shards, using a database for testing is probably the easiest thing you can do.

Now to the second part of the question: if your test is not tightly coupled with the method implementation then it will not need to be changed once you change the implementation. That is, if you are asserting on independent system state then you should be fine.

Current problems

This is a list of problems we discussed, my views on them and similar items I've seen in the past. They are valid across the board for many types of companies and teams and my only recommendation here is to analyze the root of your problems and act to resolve them. IMO a lot of the time the actual problems stem from not understanding the roots of what we are trying to validate, not from technological limitations.

Background: the company is delivering a digital product, over e-mail, without a required login procedure. There are event ticket sites which work like this.

Problem: email delivery fails, the customer closes their browser and they can't get back to what they paid for. Essentially customers lock themselves out of the product they paid for.

This is a UX problem. Email is inherently unreliable and it can break at many steps along the way. The product is not designed to be fault tolerant and to provide a way for the customer to retrieve their digital products. Options include:
Browser cookies to remember orders in the last X days;
Well designed error/warning messages about possible data loss;
Require login (email or social) or other means of backup delivery (mobile phone, second email address, etc). Login is sometimes required by regulatory bodies (KYC practices) and is also a good starting point for additional marketing/relationship building activities;
Monitoring of email delivery providers and their operation. This is business critical functionality so it must be treated like that.

The product needs enough input data from the customer to produce a deliverable. Problem: sometimes enough may not be enough, that is the backend algorithm thinks it has everything and then it runs into some corner case from which it can't recover and is not able to deliver its results to the customer.

I see this essentially as a UX problem:
Ask the customer for more info at the beginning - annoying, slows down initial product adoption, may break the conversion funnel;
Calculate what we can and randomly pick options from the DB (curated or based on statistics) and present them to the customer;
Previous point + allow the customer to proceed or go back and refine the selection which was automatically made for them - this is managing the UX around the technological limitation.

Infrastructure problems: site doesn't open (not accessible for some reason), big email queue, many levels of cache (using varnish).

Aggressive monitoring of all of these items with alerts and combined charts. This is business critical functionality and we need to always know what its status is. If you want to be fancy, couple this with an ML/AI algorithm which will predict failures in advance so you can be on alert before that happens. More importantly each problem in production must be followed by a post-mortem session (more on that later).

Integration with payment processors: how do you test this in production?

Again, aggressive monitoring of when/if these integrations are up and running, then: design a small test suite which goes directly to the website and examines if all payment options are available.
This will catch scenarios where you claim PayPal is supported but for some reason the form didn't load. The problem may not be on your side! Check preferences per country (they may have been edited by an admin on the backend), make sure what you promised is always there.

I've used a similar approach in a trading startup. We ran the suite once every hour directly against prod. Results were charted in Grafana together with the rest of the monitoring metrics. In the first two days we found that the HTML form provided by the payment processor was changing all the time - this was supposed to be stable. In the first week we discovered the payment processor had issues of their own and was down for a couple of hours during the night in our time zone. There isn't much you can do when you rely on 3rd party services but you can either
- cache and retry later, masking the backend failures from the user at your own risk (the payment may not be authorized later), or
- not accept payment, or at least warn the customer, if you are seeing/predicting 3rd party issues.

Problem: customers cancelling their payments after the product was received.

Yes, in many countries you can do so many days after you paid and got access to something. I have done so myself after non-delivery of items. In case this is a deliberate action from the customer there isn't much you can do. In case it is because they were frustrated due to problems, overzealous monitoring and communicating back to the customers will probably help.

Localization problems, missing translations, UI doesn't look good, missing images

Unless your test team speaks the language they can't understand shit. Best options IMO:
Allow the translator team to preview their work before it is committed to the current version. A simple staging server will work for this and is easy to integrate with any translation system;
Use machine checks: missing format strings, unfilled data (e.g. missing translations), 404 URLs. This is cheap to execute, can be done on Save and provides immediate feedback;
Many systems provide the option to Review & Approve the work of another peer;
Some visual testing tools (I don't have much experience here but I know they exist) will detect strings that are too long and do not fit inside buttons and other widgets. This is more in the category of visual layout testing.

Problem: on the mobile version, after a new feature was added, the 'Buy' button was overlaid by another widget and was not visible.

This means that:
previously it was not defined what testing will be performed for the new feature;
also that this 'Buy' button was not considered business critical functionality, which it is;
the person who signed off on this page was careless.

Test management tools like Kiwi TCMS can help you with organizing and documenting what needs to be tested. However, regardless of the system used, everything starts with identifying which functionality is critical and must always be present! This is the job of a tester! Once identified as critical you could probably use some tools for visual comparison to make sure this button is always available on this (and other) pages. Again a person must identify all the possible interactions we want to check for.

Problem: we released at 18:30 on Friday and went home. We discovered email delivery was broken at 10:00 the next day.

Obviously this wasn't well tested since it broke. The root cause must be analyzed and a test for it added. Also we are missing a monitoring metric here. If you are sending lots of emails then a drop under, say, 50K/hour probably means problems!
What's the reason the existing monitoring tools didn't trigger? Investigate and fix it.

Last - do not push & throw over the fence. This is the silo mentality of the past. A small team can allow itself to make these mistakes just a few times, then the company goes out of business and the people who didn't care enough to resolve the problems are out of a job. Make a policy which gives you enough time to monitor production and revert in case of problems. There are many reasons lots of companies don't release on Friday (while others do). The point here is to put the policy and the entire machinery in place so you can deal with problems when they arise. If you are not equipped to deal with these problems on a late Friday night (or any other day/night of the week) you should not be making releases then.

Problem: how do we follow up after a blunder?

In any growing team or company, especially a startup, there is more demand to work on new features than to maintain existing code, resolve problems or work on non-visible items like testing and monitoring which will help you the next time there are problems.

An evaluation framework like the Swiss cheese model is a good place to start. Prezi uses it extensively. The various sized holes are the different root causes which will lead to a problem:
missing tests
undocumented release procedure
merged without code review
incomplete feature specification
too much work, task overload

The cheese layers can be both technical and organizational. One of them can be the business stakeholders' organization: wanting too much, not budgeting time for other tasks, a tight marketing schedule, etc.

Once a post-mortem is held and the issues at hand analyzed you need to come up with a plan of action. These are your JIRA tickets about what to do next. Some will have immediate priority, others will be important 1 year from now. Once the action items are entered into your task tracking software the only thing left to do is prioritizing them accordingly.

Important: tests, monitoring, even talking about a post-mortem and other seemingly non-visible tasks are still important. If the business doesn't budget time for their completion it will ultimately fail! You can not sustain adding new features quickly for an extended period of time without taking the time to resolve your architecture, infrastructure, team and who knows what other issues. Time and resources should be evaluated and assigned according to the importance of the task and the various risks associated with it. This is no different from when we do planning for new features. Consider having the ability to analyze, adapt and resolve problems as the most important feature of your organization!

Posted by Alexander Todorov on Fri 05 April 2019 There are comments.

How to authenticate Ansible with Azure

As I am working on cloud image testing for Composer I need to create scripts that can provision virtual machines in multiple cloud platforms. Instead of using their API directly I can reuse the vast majority of Ansible cloud modules. There are modules for Azure of course, however they poorly explain how to configure authentication. Ansible docs say:

For authentication with Azure you can pass parameters, set environment variables or use a profile stored in ~/.azure/credentials. Authentication is possible using a service principal or Active Directory user.
To authenticate via service principal, pass subscription_id, client_id, secret and tenant or set environment variables AZURE_SUBSCRIPTION_ID, AZURE_CLIENT_ID, AZURE_SECRET and AZURE_TENANT.

This is how you go about configuring these variables. First install the azure-cli tools:

# rpm --import https://packages.microsoft.com/keys/microsoft.asc
# echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo
# yum install azure-cli

then login:

$ az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXXXXXXX to authenticate.
[
  {
    "cloudName": "AzureCloud",
    "id": "8d026bb1-.....",
    "isDefault": true,
    "name": "Pay-as-you-go",
    "state": "Enabled",
    "tenantId": "9f340302-......",
    "user": {
      "name": "atodorov@....",
      "type": "user"
    }
  }
]

Here id==AZURE_SUBSCRIPTION_ID and tenantId==AZURE_TENANT! Next you need a client id and secret before Ansible is able to authenticate with Azure! In fact you need to register an Active Directory Service Principal which will authenticate with the Azure REST API. In other words, when executing Ansible commands in your shell (or via a test script) that will be treated as an application which must be allowed access to Azure resources.

From the command line this is done by:

$ az ad sp create-for-rbac --name http://ansible-atodorov --role owner --scopes "/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME"
{
  "appId": "f86af23a-......",
  "displayName": "ansible-atodorov",
  "name": "http://ansible-atodorov",
  "password": "37d908aa-.......",
  "tenant": "9f340302-........."
}

Note: resource group is an Azure term, you can find more about it here. In this example appId==AZURE_CLIENT_ID and password==AZURE_SECRET. After exporting these environment variables you should be able to use Ansible to upload blobs to Azure or start virtual machines:

$ export AZURE_SUBSCRIPTION_ID=8d026bb1-.....
$ export AZURE_TENANT=9f340302-..............
$ export AZURE_CLIENT_ID=f86af23a-...........
$ export AZURE_SECRET=37d908aa-..............
$ ansible localhost -m azure_rm_storageblob -a "resource_group=composer storage_account_name=composerredhat container=composerredhat blob=linux.vhd src=linux.vhd blob_type=page"

Thanks for reading and happy testing!

Posted by Alexander Todorov on Fri 16 November 2018 There are comments.

Introducing pylint-django 2.0

Today I have released pylint-django version 2.0 on PyPI. The changes are centered around compatibility with the latest pylint 2.0 and astroid 2.0 versions. I've also bumped pylint-django's version number to reflect that.

A major component, class transformations, was updated so don't be surprised if there are bugs. All the existing test cases pass but you never know what sort of edge case there could be.

I'm also hosting a workshop/corporate training about writing pylint plugins. If you are interested see this page!

Thanks for reading and happy testing!

Posted by Alexander Todorov on Tue 24 July 2018 There are comments.
Upstream rebuilds with Jenkins Job Builder I have been working on Weldr for some time now.It is a multi-component software with several layers built on top ofeach other as seen on the image below.One of the risks that we face is introducing changes indownstream components which are going to break something up the stack!In this post I am going to show you how I have configuredJenkins to trigger dependent rebuilds and report all of the statusesback to the original GitHub PR. All of the code below is Jenkins Job Builderyaml.bdcs is the first layer of our software stack. It provides command lineutilities. codec-rpm is a library component that facilitates workingwith RPM packages (in Haskell). bdcs links to codec-rpm when it is compiled,bdcs uses some functions and data types from codec-rpm.When a pull request is opened against codec-rpm and testing completes successfullyI want to reuse that particular version of the codec-rpm library andrebuild/test bdcs with that.YAML configurationAll jobs have the following structure: -trigger -> -provision -> -runtest -> -teardown.This means that Jenkins will start executing a new job when it gets triggered byan event in GitHub (commit to master branch or new pull request), then it willprovision a slave VM in OpenStack, execute the test suite on the slave and destroyall of the resources at the end. This is repeated twice: for master branch and forpull requests! Here's how the -runtest jobs look:- job-template: name: '{name}-provision' node: master parameters: - string: name: PROVIDER scm: - git: url: 'https://github.com/weldr/{repo_name}.git' refspec: ${{git_refspec}} branches: - ${{git_branch}} builders: - github-notifier - shell: | #!/bin/bash -ex # do the openstack provisioning here # NB: runtest_job is passed to us via the -trigger job - trigger-builds: - project: '${{runtest_job}}' block: true current-parameters: true condition: 'SUCCESS' fail-on-missing: true- job-template: name: '{name}-master-runtest' node: cinch-slave project-type: freestyle description: 'Build master branch of {name}!' scm: - git: url: 'https://github.com/weldr/{repo_name}.git' branches: - master builders: - github-notifier - conditional-step: condition-kind: regex-match regex: "^.+$" label: '${{UPSTREAM_BUILD}}' on-evaluation-failure: dont-run steps: - copyartifact: project: ${{UPSTREAM_BUILD}} which-build: specific-build build-number: ${{UPSTREAM_BUILD_NUMBER}} filter: ${{UPSTREAM_ARTIFACT}} flatten: true - shell: | #!/bin/bash -ex make ci publishers: - trigger-parameterized-builds: - project: '{name}-teardown' current-parameters: true - github-notifier- job-template: name: '{name}-PR-runtest' node: cinch-slave description: 'Build PRs for {name}!' 
scm: - git: url: 'https://github.com/weldr/{repo_name}.git' refspec: +refs/pull/*:refs/remotes/origin/pr/* branches: # builds the commit hash instead of a branch - ${{ghprbActualCommit}} builders: - github-notifier - shell: | #!/bin/bash -ex make ci - conditional-step: condition-kind: current-status condition-worst: SUCCESS condition-best: SUCCESS on-evaluation-failure: dont-run steps: - shell: | #!/bin/bash -ex make after_success publishers: - archive: artifacts: '{artifacts_path}' allow-empty: '{artifacts_empty}' - conditional-publisher: - condition-kind: '{execute_dependent_job}' on-evaluation-failure: dont-run action: - trigger-parameterized-builds: - project: '{dependent_job}' current-parameters: true predefined-parameters: | UPSTREAM_ARTIFACT={artifacts_path} UPSTREAM_BUILD=${{JOB_NAME}} UPSTREAM_BUILD_NUMBER=${{build_number}} condition: 'SUCCESS' - trigger-parameterized-builds: - project: '{name}-teardown' current-parameters: true - github-notifier- job-group: name: '{name}-tests' jobs: - '{name}-provision' - '{name}-teardown' - '{name}-master-trigger' - '{name}-master-runtest' - '{name}-PR-trigger' - '{name}-PR-runtest'- job: name: 'codec-rpm-rebuild-bdcs' node: master project-type: freestyle description: 'Rebuild bdcs after codec-rpm PR!' scm: - git: url: 'https://github.com/weldr/codec-rpm.git' refspec: +refs/pull/*:refs/remotes/origin/pr/* branches: # builds the commit hash instead of a branch - ${ghprbActualCommit} builders: - github-notifier - trigger-builds: - project: 'bdcs-master-trigger' block: true predefined-parameters: | UPSTREAM_ARTIFACT=${UPSTREAM_ARTIFACT} UPSTREAM_BUILD=${UPSTREAM_BUILD} UPSTREAM_BUILD_NUMBER=${UPSTREAM_BUILD_NUMBER} publishers: - github-notifier- project: name: codec-rpm dependent_job: '{name}-rebuild-bdcs' execute_dependent_job: always artifacts_path: 'dist/{name}-latest.tar.gz' artifacts_empty: false jobs: - '{name}-tests'Publishing artifactsmake after_success is responsible for creating a tarball if codec-rpm test suitepassed. This tarball gets uploaded as artifact into Jenkins and we can make use of it later!Inside -master-runtest I have a conditional-step inside the builders section whichwill copy the artifacts from the previous build if they are present. Notice that I copyartifacts for a particular job number, which is the job for codec-rpm PR.Making use of local artifacts is handled inside bdcs' make ci because it isper-project specific and because I'd like to reuse my YAML templates.Reporting statuses to GitHubFor github-notifier to be able to report statuses back to the pull requestthe job needs to be configured with the git repository this pull request came from.This is done by specifying the same scm section for all jobs that are related andcurrent-parameters: true to pass the revision information to the other jobs.This also means that if I want to report status from codec-rpm-rebuild-bdcs thenit needs to be configured for the codec-rpm repository (see yaml) but somehowit should trigger jobs for another repository!When jobs are started via trigger-parameterized-builds their statuses are reportedseparately to GitHub. 
When they are started via trigger-builds there should be onlyone status reported.Trigger chain for dependency rebuildsWith all of the above info we can now look at the codec-rpm-rebuild-bdcs job.It is configured for the codec-rpm repository so it will report its status to the PRIt is conditionally started after codec-rpm-PR-runtest finishes successfullyIt triggers bdcs-master-trigger which in turn will rebuild & retest the bdcs component. Additional parameters specify whether we're going to use locally built artifacts or attempt to download then from HackageIt uses block: true so that the status of codec-rpm-rebuild-bdcs is dependent on the status of bdcs-master-runtest (everything in the job chain uses block: true because of this)How this looks like in practiceI have opened codec-rpm #39to validate my configuration. The chain of jobs that gets executed in Jenkins is:--- console.log for bdcs-master-runtest ---Started by upstream project "bdcs-jslave-1-provision" build number 267originally caused by: Started by upstream project "bdcs-master-trigger" build number 133 originally caused by: Started by upstream project "codec-rpm-rebuild-bdcs" build number 25 originally caused by: Started by upstream project "codec-rpm-PR-runtest" build number 77 originally caused by: Started by upstream project "codec-rpm-jslave-1-provision" build number 178 originally caused by: Started by upstream project "codec-rpm-PR-trigger" build number 118 originally caused by: GitHub pull request #39 of commit b00c923065e367afd5b7a7cc068b049bb1ed25e1, no merge conflicts.Statuses are reported on GitHub as follows:default is coming from the provisioning step and I think this is some sort of a bugor misconfiguration of the provisioning job. We don't really care about this.On the picture you can see that codec-rpm-PR-runtest was successful butcodec-rpm-rebuild-bdcs was not. The actual error when compiling bdcs is:src/BDCS/Import/RPM.hs:110:24: error: * Couldn't match type `Entry' with `C8.ByteString' Expected type: conduit-1.2.13.1:Data.Conduit.Internal.Conduit.ConduitM C8.ByteString Data.Void.Void Data.ContentStore.CsMonad ([T.Text], [Maybe ObjectDigest]) Actual type: conduit-1.2.13.1:Data.Conduit.Internal.Conduit.ConduitM Entry Data.Void.Void Data.ContentStore.CsMonad ([T.Text], [Maybe ObjectDigest]) * In the second argument of `(.|)', namely `getZipConduit ((,) ZipConduit filenames ZipConduit digests)' In the second argument of `($)', namely `src .| getZipConduit ((,) ZipConduit filenames ZipConduit digests)' In the second argument of `($)', namely `runConduit $ src .| getZipConduit ((,) ZipConduit filenames ZipConduit digests)' |110 | .| getZipConduit ((,) ZipConduit filenames | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^...That is because PR #39 changes the return type of Codec.RPM.Conduit::payloadContentsCfrom Entry to C8.ByteString.Thanks for reading and happy testing!social image CC by https://pxhere.com/en/photo/226978 Posted by Alexander Todorov on Fri 06 July 2018 There are comments. Introducing pylint-django 0.8.0 Since my previous post was aboutwriting pylint pluginsI figured I'd let you know that I've releasedpylint-django version 0.8.0over the weekend. This release merges all pull requests which werepending till now so make sure to read the change log.Starting with this release Colin Howe and myself are the newmaintainers of this package. My immediate goal is to triage all of theopen issue and figure out if they still reproduce. 
If yes try tocome up with fixes for them or at least get the conversation going again.My next goal is to integrate pylint-django withKiwi TCMS and start resolving all the 4000+errors and warnings that it produces.You are welcome to contribute of course. I'm also interested in hosting aworkshop on the topic of pylint plugins.Thanks for reading and happy testing! Posted by Alexander Todorov on Mon 22 January 2018 There are comments. How to write pylint checker plugins In this post I will walk you through the process of learning how to writeadditional checkers for pylint!PrerequisitesRead Contributing to pylint to get basic knowledge of how to execute the test suite and how it is structured. Basically call tox -e py36. Verify that all tests PASS locally!Read pylint's How To Guides, in particular the section about writing a new checker. A plugin is usually a Python module that registers a new checker.Most of pylint checkers are AST based, meaning they operate on the abstract syntax tree of the source code. You will have to familiarize yourself with the AST node reference for the astroid and ast modules. Pylint uses Astroid for parsing and augmenting the AST.NOTE: there is compact and excellent documentation provided by the Green Tree Snakes project. I would recommend the Meet the Nodes chapter.Astroid also provides exhaustive documentation and node API reference. WARNING: sometimes Astroid node class names don't match the ones from ast!Your interactive shell weapons are ast.dump(), ast.parse(), astroid.parse() and astroid.extract_node(). I use them inside an interactive Python shell to figure out how a piece of source code is parsed and converted back to AST nodes! You can also try this ast node pretty printer! I personally haven't used it.How pylint processes the AST treeEvery checker class may include special methods with namesvisit_xxx(self, node) and leave_xxx(self, node) where xxx is the lowercasename of the node class (as defined by astroid). These methods are executedautomatically when the parser iterates over nodes of the respective type.All of the magic happens inside such methods. They are responsible for collectinginformation about the context of specific statements or patterns that you wish todetect. The hard part is figuring out how to collect all the information you needbecause sometimes it can be spread across nodes of several different types (e.g.more complex code patterns).There is a special decorator called @utils.check_messages. You have to listall message ids that your visit_ or leave_ method will generate!How to select message codes and IDsOne of the most unclear things for me is message codes. pylintdocs sayThe message-id should be a 5-digit number, prefixed with a message category.There are multiple message categories, these being C, W, E, F, R,standing for Convention, Warning, Error, Fatal and Refactoring.The rest of the 5 digits should not conflict with existing checkers and theyshould be consistent across the checker. 
For instance, the first two digits should not be different across the checker.

I usually have trouble with the numbering part so you will have to get creative or look at existing checker codes.

Practical example

In Kiwi TCMS there's legacy code that looks like this:

def add_cases(run_ids, case_ids):
    trs = TestRun.objects.filter(run_id__in=pre_process_ids(run_ids))
    tcs = TestCase.objects.filter(case_id__in=pre_process_ids(case_ids))

    for tr in trs.iterator():
        for tc in tcs.iterator():
            tr.add_case_run(case=tc)

    return

Notice the dangling return statement at the end! It is useless because when missing the default return value of this function will still be None. So I've decided to create a plugin for that.

Armed with the knowledge above I first try the ast parser in the console:

Python 3.6.3 (default, Oct  5 2017, 20:27:50)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> import astroid
>>> ast.dump(ast.parse('def func():\n return'))
"Module(body=[FunctionDef(name='func', args=arguments(args=[], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[Return(value=None)], decorator_list=[], returns=None)])"
>>>
>>> node = astroid.parse('def func():\n return')
>>> node
>>> node.body
[]
>>> node.body[0]
>>> node.body[0].body
[]

As you can see there is a FunctionDef node representing the function and it has a body attribute which is a list of all statements inside the function. The last element is .body[-1] and it is of type Return! The Return node also has an attribute called .value which is the return value! The complete code will look like this:

uselessreturn.py

import astroid

from pylint import checkers
from pylint import interfaces
from pylint.checkers import utils


class UselessReturnChecker(checkers.BaseChecker):
    __implements__ = interfaces.IAstroidChecker

    name = 'useless-return'

    msgs = {
        'R2119': ("Useless return at end of function or method",
                  'useless-return',
                  'Emitted when a bare return statement is found at the end of '
                  'function or method definition'
                  ),
    }

    @utils.check_messages('useless-return')
    def visit_functiondef(self, node):
        """
            Checks for presence of return statement at the end of a function
            "return" or "return None" are useless because None is the default
            return type if they are missing
        """
        # if the function has empty body then return
        if not node.body:
            return

        last = node.body[-1]
        if isinstance(last, astroid.Return):
            # e.g. "return"
            if last.value is None:
                self.add_message('useless-return', node=node)
            # e.g. "return None"
            elif isinstance(last.value, astroid.Const) and (last.value.value is None):
                self.add_message('useless-return', node=node)


def register(linter):
    """required method to auto register this checker"""
    linter.register_checker(UselessReturnChecker(linter))

Here's how to execute the new plugin:

$ PYTHONPATH=./myplugins pylint --load-plugins=uselessreturn tcms/xmlrpc/api/testrun.py | grep useless-return
W: 40, 0: Useless return at end of function or method (useless-return)
W:117, 0: Useless return at end of function or method (useless-return)
W:242, 0: Useless return at end of function or method (useless-return)
W:495, 0: Useless return at end of function or method (useless-return)

NOTES:

If you contribute this code upstream and pylint releases it you will get a traceback:

pylint.exceptions.InvalidMessageError: Message symbol 'useless-return' is already defined

This means your checker has been released in the latest version and you can drop the custom plugin!

This example is fairly simple because the AST tree provides the information we need in a very handy way. Take a look at some of my other checkers to get a feeling of what a more complex checker looks like!

Write and run tests for your new checkers, especially if contributing upstream. Have in mind that the new checker will be executed against existing code and in combination with other checkers which could lead to some interesting results. I will leave the testing to you, it is all described in the documentation.

This particular example I've contributed as PR #1821 which happened to contradict an existing checker. The update, raising warnings only when there's a single return statement in the function body, is PR #1823.

Workshop around the corner

I will be working together with HackSoft on an in-house workshop/training for writing pylint plugins. I'm also looking at reviving pylint-django so we can write more plugins specifically for Django based projects. If you are interested in a workshop or training on the topic let me know!

Thanks for reading and happy testing!

Posted by Alexander Todorov on Fri 05 January 2018 There are comments.

On Pytest-django and LiveServerTestCase with initial data

While working on Kiwi TCMS I've had the opportunity to learn in-depth how the standard test case classes in Django work. This is a quick post about creating initial data and order of execution!

Initial test data for TransactionTestCase or LiveServerTestCase

class LiveServerTestCase(TransactionTestCase), as the name suggests, provides a running Django instance during testing. We use that for Kiwi's XML-RPC API tests, issuing http requests against the live server instance and examining the responses! For testing to work we also need some initial data. There are a few key items that need to be taken into account to accomplish that:

self._fixture_teardown() - performs ./manage.py flush which deletes all records from the database, including the ones created during initial migrations;
self.serialized_rollback - when set to True will serialize initial records from the database into a string and then load this back.
Required if subsequent tests need to have access to the records created during migrations!cls.setUpTestData is an attribute of class TestCase(TransactionTestCase) and hence can't be used to create records before any transaction based test case is executed.self._fixture_setup() is where the serialized rollback happens, thus it can be used to create initial data for your tests!In Kiwi TCMS all XML-RPC test classes have serialized_rollback = True andimplement a _fixture_setup() method instead of setUpTestData() to create thenecessary records before testing!NOTE: you can also use fixtures in the above scenario but I don't like using themand we've deleted all fixtures from Kiwi TCMS a long time ago so I didn't feel likegoing back to that!Order of test executionFromDjango's docs:In order to guarantee that all TestCase code starts with a clean database, the Django test runner reorderstests in the following way:All TestCase subclasses are run first.Then, all other Django-based tests (test cases based on SimpleTestCase, including TransactionTestCase) are run with no particular ordering guaranteed nor enforced among them.Then any other unittest.TestCase tests (including doctests) that may alter the database without restoring it to its original state are run.This is not of much concern most of the time but becomes important when you decideto mix and match transaction and non-transaction based tests into one test suite.As seen in Job #471.1tcms/xmlrpc/tests/test_serializer.py tests errored out! If you execute these testsstandalone they all pass! The root cause is that these serializer tests are based onDjango's test.TestCase class and are executed after a test.LiveServerTestCase class!The tests in tcms/xmlrpc/tests/test_product.py will flush the database, removing allrecords, including the ones from initial migrations. Then when test_serializer.py isexecuted it will call its factories which in turn rely on initial records being availableand produces an error because these records have been deleted!The reason for this is that pytest doesn't respect the order of execution for Django tests!As seenin the build log above tests are executed in the order in which they were discovered!My solution was not to use pytest (I don't need it for anything else)!At the moment I'm dealing with strange errors/segmentation faults when running Kiwi's testsunder Django 2.0. It looks like the http response has been closed before the client sidetries to read it. Why this happens I have not been able to figure out yet. Expect anotherblog post when I do.Thanks for reading and happy testing! Posted by Alexander Todorov on Tue 26 December 2017 There are comments. How to configure MTU for the Docker network On one of my Jenkins slaves I've been experiencing problems when downloadingfiles from the network. In particular with cabal update which fetches datafrom hackage.haskell.org. As suggested by David Roble the problem and solutionlies in the MTU configured for the default docker0 interface!By default docker0 had MTU of 1500 which should be lower than thehost eth0 MTU of 1400! To configure this before the docker daemon is startedplace any non-default settings in /etc/docker/daemon.json! For more informationhead tohttps://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/.Thanks for reading and happy testing! Posted by Alexander Todorov on Fri 08 December 2017 There are comments. 4 Situational Leadership Styles At SEETEST this year I visited only tracks related tomanagement and leadership. 
The presentationHow good leadership makes you a great team player by Jeroen Rosinkwas of particular interest to me. He talked aboutsituational leadership.Image by Penn State UniversityAccording to Hersey & Blanchard each situation/person is different and it requiresa leader or manager to adjust their style in order to be successful. In particularas a leader you have to approach each team, person and skill differently based on howdeveloped they are. In this context a skill can be any technical or non-technicalskill, a particular competence level required or anything really. The idea is thatfor ever item that we would like to develop we would go through the cycle shownon the image above.DirectingEvery new employee, team member, junior IT specialist starts with some directing.This is the phase where you tell people what they have to do and how to do itexactly. This is the phase of the almighty boss who provides the what, how,why, when and where!In this phase an inexperienced(or new) person will figure out what is requiredof them and give them detailed steps of how to achieve it.Experienced team members will quickly find their bearings and transition outof this phase.CoachingIn this phase the individual has already acquired some skills but they are notfully developed. In addition to tasks here we also focus at supporting the individual to improvetheir skills and deepen the connection and trust between them and the leader. Thisis the basis of creating strong commitment in the future.Think about coaches of sport teams. What they do is give direction in order tocreate the best players/teams.SupportingThis phase comes naturally after coaching. Here we can also make the parallel withsport teams. In this phase team members are already competent in their skills butsomewhat inconsistent in their performance and not very committed to theend goal of the team (e.g. winning, testing all bugs, delivering software on time).This is the phase in whichshared decisions are taken (what to test, how we should test, how to split thetasks between team members) and in which teams are formed.Here a leader must focus less on the particular tasks and much more on therelationships within the group (don't forget the leader is also part of the group).DelegatingThis is the end phase in which we have individuals with strong skills and strongcommitment. They are able to work and progress on their own. The job of the leaderhere is to monitor progress and still be part of some decisions. What I've seenpeople who I believe were delegating do is mostlyreaffirm the decisions taken by the team.In this phase there's no need for the leader to focus on tasks and relationshipbut rather high level goals and IMO providing opportunities for growth ofeach individual team member. This is the phase where future leaders will come from.What that means for the team ?Notice the smaller section in the image above titled Development Level!While an individual or a team is going through the different phases ofleadership they also go through various stages of development. 
At the endof the cycle we get individuals with very strong skills and very strongcommitment and work ethics.What that means for the leader ?(stats from presentation at the conference)54% of leaders can use only 1 style34% of leaders can use 2 styles11% of leaders can use 3 styles1% of leaders can use 4 stylesThis means as leaders we have a lot to learn if we want to become effective.We have to learn to recognize at what stage of development an organization and/ora team is and what are the various stages of development of individual team members.Then apply this model as appropriate.A side note: I am currently working with a group of young developers on an open sourceproject where all of them are pretty much at the beginning of their journey. Theylack almost all necessary technical skills that are needed to work on the projectand their profiles, including age are very similar to one another. I believe this isan ideal situation to apply this model and see how it goes (expect results in a year or so).Note2: I will have 2 more developers joining the same project a bit later and I expectone of them to be able to get up to speed faster (so far I have observed veryimpressive self-development in them) so that will spice things up a bit :)Bonus questionDo you rememberThe 4 Basic Communication Stylespost from last year? I have the feeling that these styles are very much relatedto the leadership strategies described above. For example Director is using the directing style,Expresser sounds a lot like a coach to me and Harmonizer is using the supporting style.Only a Thinker doesn't quite fit but on the other hand they can be quite self-drivenand not need supervision.I don't know if there's something here or I'm totally making things up. I'd love toget some insights from psychologists, leadership experts and communication experts.Further readingHere are a few basic articles to get you startedhttps://online.stu.edu/situational-leadership/https://johnkwhitehead.ca/situational-leadership/http://www.leadership-central.com/situational-leadership-theory.html#axzz4y4I4vrZ7Thanks for reading and happy testing (your leaderhip skills)! Posted by Alexander Todorov on Sat 11 November 2017 There are comments. Fallback to default values for NULL columns in Rust SQLite I have been working on code which changed its DB schema to add a NULL columnwithout a default value! The standard row.get() from Rusqlite throws errorsbecause NULL is not a valid integer value.The solution is to use row.get_checked() like so:let build_id = row.get_checked(3).unwrap_or(0);Interestingly enough I wasn't able to find clear information about this on theInternet so here it is.Thanks for reading and happy hacking! Posted by Alexander Todorov on Fri 27 October 2017 There are comments. The ARCS model of motivational design The ARCS model is an instructional design method developed by John Kellerthat focuses on motivation. ARCS is based on a research into best practicesand successful teachers and gives you tactics on how to evaluate yourlessons in order to build motivation right into them.I have conducted and oversaw quite a few trainings and I have not been impressedwith the success rate of those so this topic is very dear to me.Success for me measures in the ability to complete the training andlearn the basis of a technical topic. 
And then gather the initialmomentum to continue developing your skills within the chosen field.This is what I've been doing for myself and this is what I'd like tosee my students do.In his paper (I have a year 2000 printed copy from Cuba)Keller argues that motivation is a product of four factors:Attention, Relevance, Confidence and Satisfaction. You need all of themincorporated in your lessons and learning materials for them to be motivational.I could argue that you need the same characteristics at work in order tomotivate people to do their job as you wish.Once you start a lesson you need to grab the audience Attention so theycan listen to you. Then the topic needs to be relevant to the audienceso they will continue listening to the end. This makes for a good startbut is not enough. Confidence means for the audience to feel confidentthey can perform all the necessary tasks on their own, that they havewhat it takes to learn (and you have to build that). If they think theycan't make it from the start then it is a lost battle. And Satisfactionmeans the person feels that achievements are due to their own abilities andhard work not due to external factors (work not demanding enough, luck, etc).If all of the above 4 factors are true then the audience should feelpersonally motivated to learn because they can clearly understand thebenefit for themselves and they realize that everything depends on them.ARCS gives you a model to evaluate your target audience and lesson propertiesand figure out tactics by which to address any shortcomings in the above 4 areas.Last Friday I hosted 2 training sessions: a Python and Selenium workshopat HackConf and then a lecture about test case management and demo ofKiwi TCMS before students at Pragmatic IT academy.For both of them I used the simplified ARCS evaluation matrix.In this matrix the columns map to the ARCS areas while the rows map todifferent parts of the lesson: audience, presentation media, exercise, etc.Here's how I used them (I've mostly analyzed the audience).Python & Selenium workshopAttention(+) this is an elective workshop(+) the topic is clear and the curricula is on GitHub(+) the title is catchy (Learn Python & Selenium in 6 hours)(+) I am well known in the industryRelevance(+) Basic Python practical skills, being able to write small programs, knowing the basic building blocks(+) Basic Selenium skills: finding and using elements(+) Basic Python test automation skills: writing simple tests and assertsConfidence(+) each task has tests which need to report PASS at the end(-) need to use PyCharm IDE, unfamiliar with IDEs(-) not enough experience with programming or Linux(-) not enough experience with (automation) testing(-) all materials and exercises are in EnglishSatisfaction(-) not being able to create a simple programFrom the above it was clear that I didn't need to spend much time on buildingattention or relevance. The topic itself and the fact that these are skill whichcan be immediately applied at work gave the workshop a huge boost. During theopening part of my workshop I've stated "this training takes around 2 months,I've seen some of you forking my GitHub repo so I know you are prepared. Let'ssee how much you can do in 6 hours" which sets the challenge and was my attentionbuilding moment. Then I reiterated that all skills are directly applicable indaily work confirming the relevance part.I did need a confidence building strategy though. So having all the tests readymeant evaluation was quick and easy. 
Anton (my assistant) and I promised to helpwith the IDE and all other questions to counter the other items on the list.During the course of the workshop I did quick code review of all participantsthat managed to complete their tasks within the hour giving them quick tips onhow to perform or highlighting pieces of code/approaches that were differentfrom mine or that I found elegant or interesting. This was my confidence buildingstrategy. Code review and verbal praising also touches on the satisfactionarea, i.e. the participant gets the feeling they are doing well.My Satisfaction building strategy was kind of mixed. Before I read about ARCSI wanted to give penalty points to participants who didn't complete on time and thensend them home after 3 fails. At the end I only said I will do this but didn'tdo it.Instead I used the challenge statement from the attention phase andturned that into a competition. The first 3 participants to complete their module tasks on timewere rewarded chocolates. With the agreement of the entire group the grand prizewas set to be a small box of the same chocolates and this would be awarded tothe person with the most chocolates (e.g. the one who's been in top 3 the most times).Bistra is our winner. 4/5 times in top 3 #Python #Selenium #testing #HC17 pic.twitter.com/vXrPhElbbW— Alexander Todorov (@atodorov_) September 29, 2017I don't know if ARCS had anything to do with it but this workshopwas the most successful training I've ever done. 40% of the participantsmanaged to get at least one chocolate and at least 50% have completed all oftheir tasks within the hour. Normally a passing rate on such training isaround 10 to 20 %.During the workshop we had 5 different modules which consisted of 10-15 minutesexplanation of Python basics (e.g. loops or if conditions), quick Q&A sessionand around 30 minutes for working alone and code review. I don't think I was followingARCS for each of the separate modules because I didn't have time to analyze themindividually. I gambled all my money on the introductory 10 minutes!TCMS lectureMy second lecture for the day was about test case management. The audience wasstudents who are aspiring to become software testers and attending theSoftware Testing training at Pragmatic. In my lecture (around 1 hour) I wantedto explain what test management is, why it is important and also demo thetool I'm working on - Kiwi TCMS. The analysis looks like:Attention(+) the entire training was elective but(-) that particular lecture was mandatory. Students were not able to select what they are going to studyRelevance(-) it may not be clear what TCMS is and why we need it(+) however students may sense that this is something work related since the entire training isConfidence(-) unknown UI, generally unfamiliar workflow(-) not enough knowledge how to write a Test Plan document or test casesSatisfaction(-) how to make sure new skills can be applied in practiceSo I was in a medium need of a strategy to build attention. My opening was by introducingmyself to establish my professional level and introducing Kiwi TCMSby saying it is the best open source test case management system to which I'm one of thecore maintainers.Then I had a medium need of a relevance building strategy. I did this by explaining whattest management is and why it is important. I've talked briefly about QA managers trying toindirectly inspire the audience to aim for this position. 
I finished this part by telling the students how a TCMS helps the ordinary person in their daily work - namely by giving you a dashboard where you can monitor all the work you need to do, check your progress, etc.

I was in strong need of a confidence building strategy. I did a 20-30 minute demonstration where I wrote a Test Plan and test cases and then pretended to execute them, marking bugs and test results in the system. I told the students "you are my boss for today, tell me what I need to test". So they instructed me to test the login functionality of the system and we agreed on 5 different test cases. I described all of these in Kiwi TCMS and began executing them. During execution I opened another browser window and did exactly what the test case steps were asking for. There were some bugs, so I promptly marked them as such and promised I would fix them.

To build satisfaction I was planning on having the students write one test plan and some test cases, but we didn't have time for this. Their instructor promised they will be doing more exercises and using Kiwi TCMS in the next 2 months, but this remains to be seen.

I wrapped up my lecture by advising the students to use Kiwi TCMS as a portfolio building tool. Since they are newcomers to the QA industry their next priority will be looking for a job. I advised them to document their test plans and test cases in Kiwi TCMS and then present these artifacts to future employers. I also told them they are more than welcome to test and report bugs against Kiwi TCMS on GitHub and add those bugs to their portfolio!

This is how I applied ARCS for the first time. I like it and will continue to use it for my trainings and workshops. I will try harder to make the application process more iterative and apply the method not only to my opening speech but to all submodules as well!

One thing that bothers me is whether I can apply the ARCS principles when doing a technical presentation, and how they play together or clash with storytelling, communication style and rhetoric (all topics I'm exploring for my public speaking). If you have more experience with these please share it in the comments below.

Thanks for reading and happy testing! Posted by Alexander Todorov on Thu 05 October 2017 There are comments.

Storytelling for test professionals

This is a very condensed brief of an 8 hour workshop I attended earlier this year, held by Huib Schoots. You can find the slides here.

Storytelling is the form in which people naturally communicate. Understanding the building blocks of a story helps us understand other people's motivations, serves as a map for actions and emotions, helps uncover unknown perspectives and serves as a source of inspiration. Stories stand on their own and have a beginning, a middle and an end. There is a main character and a storyline with development. Stories are authentic and personal, often provocative, and they evoke emotions.

7 basic story plots
- Overcoming the Monster
- Rags to Riches
- The Quest
- Voyage and Return
- Comedy
- Tragedy
- Rebirth

From these we can derive the following types of stories.

6 types of stories
- Who am I (identity stories)
- Why am I here (motive and mission stories)
- Vision stories (the big picture)
- Future scenarios (imagining the future)
- Product stories (branding)
- Culture stories (a sum of other stories)

12 Common Archetypes

Each story needs a hero and there are 12 common archetypes of heroes. More importantly, you can also find these archetypes within your team and organization. Read the link above to find out what their motto, core desire, goals, fears and motives are.
The 12 types are:
- Innocent
- Everyman
- Hero
- Caregiver
- Explorer
- Rebel
- Lover
- Creator
- Jester
- Sage
- Magician
- Ruler

6 key elements of a story
- Who is the hero?
- What is their desire?
- What is stopping them?
- What is the turning point?
- What are their insights?
- What is the solution?

Dramatic structure and Freytag's pyramid

One of the most commonly used storytelling structures is Freytag's Pyramid. According to it, each story has an exposition, rising action, climax, falling action and resolution. I think this can be applied directly when preparing presentations, even technical ones.

The Hero's journey

Successful stories follow the 12 steps of the hero's journey:
- Ordinary world
- Call to adventure
- Refusal of the call
- Meeting the mentor
- Crossing the threshold (after which the hero enters the Special world)
- Tests, allies and enemies
- Approach
- Ordeal, death & rebirth
- Rewards, seizing the sword
- The road back (to the ordinary world)
- Resurrection
- Return with the elixir

As part of the workshop we worked in groups and created a completely made-up story. Every person in the group contributed a couple of sentences from their own experiences, trying to describe a particular step of the hero's journey. At the end we told a story from the point of view of a single hero which was a complete mash-up of moments that had nothing to do with each other. Still, it sounded very realistic and plausible.

Storytelling techniques

SUCCESS means Simple, Unexpected, Concrete, Credible, Emotional, Stories. To use this technique, find the core of your idea, grab people's attention by surprising them and make sure the idea can be understood and remembered later. Find a way to make people believe in the idea so they can test it for themselves, and make them feel something so they understand why this idea is important. Tell stories and empower people to use an idea through narrative.

STAR means Something They will Always Remember. A STAR moment should be Simple, Transferable, Audience-centered, Repeatable, and Meaningful. There are 5 types of STAR moments: memorable dramatization, repeatable sound bites, evocative visuals, emotive storytelling and shocking statistics.

To enhance our stories and presentations we should appeal to the senses (smell, sound, sight, touch, taste) and make them visual.

I will be using some of these techniques, combined with others, in my future presentations and workshops. I'd love to be able to summarize all of them into a short guide targeted at IT professionals, but I don't know if this is even possible. Anyway, if you do try some of these techniques in your public speaking please let me know how it goes. I want to hear what works for you and your audience and what doesn't.

Thanks for reading and happy testing! Posted by Alexander Todorov on Tue 03 October 2017 There are comments.

More tests for login forms

By now I have probably documented more test cases for login forms than anyone else. You can check out my previous posts on the topic here and here. I give you a few more examples.

Test 01 and 02: First of all, let's start by saying that a "Remember me" checkbox should actually remember the user and log them in automatically on the next visit if checked, and the other way around if not checked. I don't think this has been mentioned previously!

Test 03: When there is a "Remember me" checkbox it should be selectable both with the mouse and the keyboard. On my.telenor.bg the checkbox changes its image only when clicked with the mouse. Also, clicking the login button with Space doesn't work! Interestingly enough, when I don't select "Remember me" at all and then close and revisit the page, I am still able to access the internal pages of my account! At this point I'm not quite sure what this checkbox does!
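Keyboard operability like this is easy to automate. Below is a minimal sketch in Python with Selenium; the URL and the element IDs (username, password, remember-me) are hypothetical placeholders, not taken from any real site, so adapt them to the application under test.

    # keyboard_login_sketch.py - hypothetical example: drive the login form
    # with the keyboard only, then verify the "Remember me" checkbox toggled
    # and that login via the Enter key succeeded.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/login")  # placeholder URL

        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")

        # toggle "Remember me" with Space instead of clicking it
        remember_me = driver.find_element(By.ID, "remember-me")
        remember_me.send_keys(Keys.SPACE)
        assert remember_me.is_selected()

        # submit the form with Enter instead of clicking the login button
        driver.find_element(By.ID, "password").send_keys(Keys.ENTER)
        assert "Logout" in driver.page_source
    finally:
        driver.quit()

The same sequence, extended with Tab navigation between fields, can also verify that the whole form is reachable without a mouse at all.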
Test 04: Testing two factor authentication. I had a case where the GitHub SMS didn't arrive for over 24 hours and I wasn't able to log in. After requesting a new code you can see the UI updating, but I didn't receive another message. In this particular case I received only one message, with an already invalid code. So test for:
- how long it takes for the codes to expire
- whether there is visual feedback indicating how many codes have been requested
- whether the latest code invalidates all previous ones, or all unused codes still work
- what happens if I'm already logged in and somebody tries to access my account, requesting additional codes which may or may not invalidate my login session

Test 05: Check that confirmation codes, links, etc. actually expire after their configured time. Kiwi TCMS had this problem, which has been fixed in version 3.32.

Test 06: Is this a social-network-login only site? Then which of my profiles did I use? Check that there is a working social auth provider reminder.

Test 07: Check that there is an error message visible (e.g. wrong login credentials). After the redesign Kiwi TCMS had stopped displaying this message and instead presented the user with the login form again!

Also check out these testing challenges by Claudiu Draghia, where you can see many cases related to input field validation! For example: empty field, value too long, special characters in the field, etc. All of these can lead to issues depending on how login is implemented.

Thanks for reading and happy testing! Posted by Alexander Todorov on Mon 02 October 2017 There are comments.

Xiaomi's selfie bug

Recently I've been exploring the user interface of a Xiaomi Redmi Note 4X phone and noticed a peculiar bug, adding to my collection of obscure phone bugs. Sometimes when taking selfies the images are not saved in the correct orientation. Instead they are saved as if looking in the mirror, and this is a bug!

While taking the selfie the display correctly acts as a mirror; see my personal Samsung S5 (black) and the Xiaomi device (white). However, when the image is saved and then viewed through the gallery application there is a difference. The image below was taken with the Xiaomi device and no effects have been added to it except scaling and cropping. As you can see, the letters on the cereal box are mirrored!

The symptoms of the bug are not quite clear yet. I've managed to reproduce it at around a 50% rate so far. I've tried taking pictures during the day in direct sunlight and in the shade, and in the evening under bad artificial lighting; a photo of a child's face, then the child plus a varying number of adults, then a photo of only one or more adults - heck, I even took a picture of myself. I thought that lighting or the number of faces and their age had something to do with this bug, but so far I'm not getting consistent results. Sometimes the images turn out OK and other times they don't, regardless of what I take a picture of. I also took a picture of the same cereal box, under the same conditions as above but without capturing the child's face, and the image came out not mirrored.
The only clue that seems to hold true so far is that you need to have people's faces in the picture for this bug to reproduce, but that isn't exactly an edge case when taking selfies, right?

I've also compared the results with my Samsung S5 (Android version 6.0.1) and BlackBerry Z10 devices and both work as expected: while taking the picture the display acts as a mirror, but when viewing the saved image it appears in normal orientation. On the S5 there is also a clearly visible "Processing" progress bar while the picture is being saved!

For reference, the system information is below:
Model number: Redmi Note 4X
Android version: 6.0 MRA58K
Android security patch level: 2017-03-01
Kernel version: 3.18.22+

I'd love it if somebody from Xiaomi's engineering department looked into this and sent me a root cause analysis of the problem.

Thanks for reading and happy testing! Oh, and by the way, this is my breakfast, not hers! Posted by Alexander Todorov on Fri 08 September 2017 There are comments.

Speeding up Rust builds inside Docker

Currently it is not possible to instruct cargo, the Rust package manager, to build only the dependencies of the software you are compiling! This means you can't easily pre-install build dependencies. Luckily you can work around this with cargo build -p! I've been using this Python script to parse Cargo.toml:

parse-cargo-toml.py

    #!/usr/bin/env python
    from __future__ import print_function

    import os
    import toml

    _pwd = os.path.dirname(os.path.abspath(__file__))
    cargo = toml.loads(open(os.path.join(_pwd, 'Cargo.toml'), 'r').read())

    # emit one "cargo build -p <crate>" command per dependency
    for section in ['dependencies', 'dev-dependencies']:
        for dep, version in cargo[section].items():
            print('cargo build -p %s' % dep)

and then inside my Dockerfile:

    RUN mkdir /bdcs-api-rs/
    COPY parse-cargo-toml.py /bdcs-api-rs/

    # Manually install cargo dependencies before building
    # so we can have a reusable intermediate container.
    # This workaround is needed until cargo can do this by itself:
    # https://github.com/rust-lang/cargo/issues/2644
    # https://github.com/rust-lang/cargo/pull/3567
    COPY Cargo.toml /bdcs-api-rs/
    WORKDIR /bdcs-api-rs/
    RUN python ./parse-cargo-toml.py | while read cmd; do \
            $cmd; \
        done

It doesn't take into account the version constraints specified in Cargo.toml, but it is still able to produce an intermediate Docker layer which I can use to speed up my tests by caching the dependency compilation part.

As seen in the build log, lines 1173-1182, when doing cargo build it downloads and compiles only chrono v0.3.0 and toml v0.3.2; the rest of the dependencies are already available. The logs also show that after Job #285 the build times dropped from 16 minutes down to 3-4 minutes due to Docker caching. This would be even less if the cache were kept locally!

Thanks for reading and happy testing! Posted by Alexander Todorov on Wed 30 August 2017 There are comments.

Code coverage from Nightmare.js tests

In this article I'm going to walk you through the steps required to collect code coverage when running an end-to-end test suite against a React.js application.

The application under test is served as an index.html file and a main.js file which intercepts all interactions from the user and sends requests to the backend API when needed. There is an existing unit-test suite which loads the individual components and tests them in isolation. Apparently people do this! There is also an end-to-end test suite which does the majority of the testing. It fires up a browser instance and interacts with the application.
Everything runs inside Docker containers, providing a full-blown production-like environment. The tests look like this:

    test('should switch to Edit Recipe page - recipe creation success', (done) => {
      const nightmare = new Nightmare();
      nightmare
        .goto(recipesPage.url)
        .wait(recipesPage.btnCreateRecipe)
        .click(recipesPage.btnCreateRecipe)
        .wait(page => document.querySelector(page.dialogRootElement).style.display === 'block'
          , createRecipePage)
        .insert(createRecipePage.inputName, createRecipePage.varRecName)
        .insert(createRecipePage.inputDescription, createRecipePage.varRecDesc)
        .click(createRecipePage.btnSave)
        .wait(editRecipePage.componentListItemRootElement)
        .exists(editRecipePage.componentListItemRootElement)
        .end() // remove this!
        .then((element) => {
          expect(element).toBe(true);
          // here goes coverage collection helper
          done(); // remove this!
        });
    }, timeout);

The browser interaction is handled by Nightmare.js (sort of like Selenium) and the test runner is Jest.

Code instrumentation

The first thing we need is to instrument the application code so that it provides coverage statistics. This is done via babel-plugin-istanbul. Because unit tests are executed a bit differently, we want to enable conditional instrumentation. For unit tests we use jest --coverage, which enables istanbul on the fly, and having the code already instrumented breaks this. So I have the following in webpack.config.js:

    if (process.argv.includes('--with-coverage')) {
        babelConfig.plugins.push('istanbul');
    }

and then build my application with node run build --with-coverage. You can execute node run start --with-coverage, open the JavaScript console in your browser and inspect the window.__coverage__ variable. If it is defined, the application is instrumented correctly.

Fetching coverage information from within the tests

Remember that main.js from the beginning of this post? It lives inside index.html, which means everything gets downloaded to the client side and executed there. When running the end-to-end test suite, that client is the browser instance controlled via Nightmare. You have to pass window.__coverage__ from the browser scope back to the nodejs scope via nightmare.evaluate()! I opted to save the coverage data directly on the file system and make it available to coverage reporting tools later.

My coverage collecting snippet looks like this:

    nightmare
      .evaluate(() => window.__coverage__) // this executes in browser scope
      .end() // terminate the Electron (browser) process
      .then((cov) => {
        // this executes in Node scope
        // handle the data passed back to us from browser scope
        const strCoverage = JSON.stringify(cov);
        const hash = require('crypto').createHmac('sha256', '')
          .update(strCoverage)
          .digest('hex');
        const fileName = `/tmp/coverage-${hash}.json`;
        require('fs').writeFileSync(fileName, strCoverage);
        done(); // the callback from the test
      }).catch(err => console.log(err));

Nightmare returns window.__coverage__ from browser scope back to nodejs scope and we save it under /tmp, using a hash of the coverage data as the file name.

Side note: I have about 40% fewer coverage files than test cases. This means some test scenarios exercise the same code paths. Storing the individual coverage reports under a hashed file name makes this very easy to see!

Note that in my coverage handling code I also call .end(), which terminates the browser instance, and execute the done() callback which is passed as a parameter to the test above. This is important because it means we had to update the way tests were written.
In particular, the Nightmare method sequence must not call .end() and done() except in the coverage handling code. The coverage helper must be the last code executed inside the body of the last .then() method, which is usually after all assertions (expectations) have been met!

Now, this coverage helper needs to be part of every single test case, so I wanted it to be a one-line function that is easy to copy and paste. All my attempts to move this code inside a module have been futile. I can get the module loaded but it kept failing with:

    Unhandled promise rejection (rejection id: 1): cov_23rlop1885 is not defined

In the end I resorted to this simple hack:

    eval(fs.readFileSync('utils/coverage.js').toString());

Shout-out to Krasimir Tsonev, who joined me for a two-day pairing session to figure this stuff out. Too bad we couldn't quite figure it out. If you do, please send me a pull request!

Reporting the results

All of these coverage-*.json files are directly consumable by nyc - the coverage reporting tool that comes with the Istanbul suite! I mounted .nyc_output/ directly under /tmp inside my Docker container so I could run:

    nyc report
    nyc report --reporter=lcov | codecov

We can also modify the unit-test command to jest --coverage --coverageReporters json --coverageDirectory .nyc_output so it produces a coverage-final.json file for nyc. Use this if you want to combine the coverage reports from both test suites.

Because I'm using Travis CI the two test suites are executed independently and there is no easy way to share information between them. Instead I've switched from Coveralls to CodeCov, which is smart enough to merge coverage submissions coming from multiple jobs on the same git commit. You can compare the commit submitting only unit-test results with the one submitting coverage from both test suites.

All of the above steps are put into practice in PR #136 if you want to check them out!

Thanks for reading and happy testing! Posted by Alexander Todorov on Sat 12 August 2017 There are comments.