
Monday, November 9, 2015

New Adventures in Software Testing

I recently described my past years in testing, but this post is more about the future, using data to back it up.  However, we can only look into the future using the mirror of the past.  In writing my last 70 or so articles, I have gathered a good number of thoughts, and I have posted on average 1-2 in-depth posts a month.  Based on Blogger's data, I had assumed that I was reaching an audience of substance.  However, when I investigated my actual reach, it turned out to be substantially smaller than I thought.  I have two purposes for writing.  One is to collect my own thoughts; the other is to document my ideas for others to use.  Until today.

I am backing away from my editorial adventures for the time being in order to focus on another adventure.  But wait, don't leave yet.  You've not even asked what it is.  Well, here is the last article I plan to write in the near future.

Oftentimes, I have been asked what sort of code someone should write when they want to learn to code.  I often come back with the reply: well, what sorts of problems do you want to solve?  Why write code at all?  If you have no vision for what to write, staying motivated to write code can be incredibly hard.  Even if the answer is "I want to write web applications to get a job," that at least provides a hint about your motivations and what sorts of study you need.  Some people have no focus at all and just want to do "something cool".  I have found these people rarely have the interest to complete projects, as cool moves so fast.

In learning to code, I started out with games, like most other people my age.  I built up a set of programs over the years, some of which I spent years developing.  Oftentimes, these programs were designed for a particular need I experienced.  In line with my own advice, I work on what I find I need rather than what is cool.  Several years ago I became a landlord and found I needed a way to track my write-offs.  I also had a job interview that involved PHP, so I wrote a simple tool to track write-offs using PHP and Postgres.  I didn't get the job.  My PHP application was clunky and required me to record all the data by hand, yet my bank already knew about these transactions.  Why couldn't I just automate the task?  I didn't enjoy working in PHP, so I moved to .NET and started building a Windows application.

I worked for about 3 weeks and had a rough framework designed, primarily using reflection and an Excel-like control.  I started to refine it, and within a month I had an application I was happy with.  I kept refining it over a few years, but it was always just meant for me.  I fixed big bugs, but since it only had to please me, I didn't worry too much about the rest.

Then, just this last year, my boss, Isaac, asked me a question: was I ever going to sell my project?  I thought about it for a while and decided it would be an interesting effort and a good learning experience.  I have always wanted to work for myself, so why not?  I started in on my adventure, which included multiple iterations and efforts, many of them running in parallel over the last year.

What Features Do I Need?


I knew what I needed.  I needed one serious feature: the ability to mark items as write-offs.  No one except the massive applications supported that sort of feature, and those mazes of confusion and madness were more complex than I wanted.  Most of the small open source applications either didn't support the feature or were equally complicated.  Furthermore, the thing I personally needed only a little is one of the big words in personal finance: budgeting.  So, following the heuristic of comparable products, I started examining other products.  I worked on developing features and creating test maps around what would need testing.  But more on that later.

This is a more complex question than you might think.  For example, what does your EULA look like?  How do you authenticate your license key?  What other non-functional requirements are you missing?  I developed a large list of requirements and had to build them all out.

How Do I Start a Business?


I had to start investigating how to license software.  Even the word license is tricky, as I don't sell software, I license its usage.  That affects taxes.  You have to gather all sorts of documentation for the government to allow you to start a business.  Then there is the logo.  What about the store front?  The list goes on and on.  On the good side, I have no employees, so that simplifies a great deal.

Marketing?


Let me be frank with the few people who read this regularly: I'm not good at marketing.  I choose privacy over announcing my existence over and over again.  To make even a personal blog big, it appears one must do lots of advertising.  Isaac suggested it takes 2 years of effort to maybe hit it big, ignoring Black Swans.  I still don't have a personal Twitter handle, although, to my chagrin, I do have one for the business.  This will likely be the hardest part for me.  I have a few tweets queued up and several blog posts about personal finance written.  Hopefully my efforts will pay off, but one never knows.

I am also starting to learn about Google Ads and Bing Ads.  In reality, things are way more complicated than you might imagine.  Google's ad system is crazy-complicated.  Campaigns connect to ads, which tie to keywords or to a screen scraper Google uses.  Google Ads uses a complicated bidding system in which the winner doesn't pay what they bid, but just enough to beat the next-highest bidder.  The more I learn, the more I learn I know nothing.

Documentation?


I grabbed a free-for-a-year AWS instance, set up a wiki system, and started writing.  I kept writing and writing.  I have written documentation for several open source systems and have done some fairly extensive documentation in the past.  In this case, I tried tackling the problem from several different directions.  The first way I tried was to write out the different features of the system.  Then I took a different tack and wrote about how you can accomplish tasks.

Testing?


As I built out new features, I discovered how complicated my code base had become.  I had never done any serious regression testing on my work, since it was built just for me.  I had added features I thought might be useful, but it was still poorly tested.  So I built up a list of all the features and major feature interactions and kept a copy in Google Docs.  Once my program was in a mostly stable state, I started rigorously testing it.  I wish I could claim I didn't find any bugs, but that would not be true.  Instead, I found more than two dozen bugs.  Many of them were relatively minor; others were usability issues.  My list of needed features grew even larger.  I spent 10-20 hours a week for several weeks testing, on top of my normal day job.

One interesting thing I want to note to all my fellow testers is how hard it is to do both the software development and the testing.  While I certainly did testing, I found that leaving some time between developing the code and doing in-depth testing gave me a chance to forget exactly how I coded something.  That way, my bias in expectations would be gone, allowing me to note usability issues much more clearly.  I also noticed how hard it was to remain motivated.  Testing for 40 hours a week and then going home and doing more of the same is a challenge.  If you're on a death march, imagine how unmotivated a developer is to do any testing of their software.

Shipping is a Feature


Ultimately, I got my application to a stable state.  While I do still occasionally find a bug, I have not found any big bugs in weeks.  I am still doing occasional testing, in that I use it myself.  I do find value in some of the new features I have added in the past 12 months, so I spend more time in my application than I used to.  It's still fairly trim compared to many of my competitors, but I think that is a feature.  Simpler systems often have their own sort of value.  I think it is ready for release.  So, on Oct 31st, I released my program, Money Pig, into the world.




I'm also stepping away from our blog for a while, while I work on Money Pig.  I shall return!

- JCD

Thursday, January 8, 2015

White Papers and Who Owns Your Education?

Let me start out by pointing you to this rather complex white paper, A Large Scale Study of Programming Languages and Code Quality in Github.  It inspired me to write a little about reading white papers and why they matter.  Trying to read, or perhaps decipher, it made me feel a little stupid.  What, never heard of the Chi-Square test?  How about Cramér's V?  Does that make you feel like, "Then why are you reading this?  Don't you feel dumb?  You should go back to Yahoo News or wherever you came from."  That is the voice in the back of my head, and yes, reading these sorts of papers certainly can make me feel dumb.  Fortunately, I know better.  Even if I am not an accomplished statistician, I'm sure there are bits of data worth gathering.  I personally believe we should try to improve our development* in whatever ways possible, and this study is trying to do so by providing a very high level view of computer languages.  Even though it is not particularly about finding bugs or testing, it does give some insights into bugs and their relationship to languages.
* Pun intended

Let me start with a word of warning before we deep dive into the analysis of this whitepaper.
"One should not overestimate the impact of language on defects.  While these relationships [in the chart in the study] are statistically significant, the effects are quite small.... activity [commits] in a project accounts for the majority of explained deviance [difference in bug rates]... the next closest predictor which accounts for less than one percent of the total deviance, is language."
So while language is important, code churn is more important.  That, however, does not end the story.  There are still important pieces to pick up.  I am going to leave the definitions of what constitutes a particular category, such as a 'functional language', to the actual paper, which you can look up if it is of interest.  Perhaps you know the language you are working with, so considering a list of languages might be of interest.
"...a greater number of defect fixes... include C, C++, JavaScript, Objective-C, Php, Python. ... Clojure, Haskell, Ruby, Scala, and TypeScript... are less likely than average to result in defect fixing commits."
For most companies this is bad news.  Few people write Scala or Haskell on a day to day basis.  Worse yet, I wonder if the experience level of those who write in such languages is greater than average, something the study could not control for.  From what I could tell, they didn't even consider looking at the length of time a user's name was associated with a repository as a proxy for experience.  Since experience was not considered, perhaps the broader categories, such as 'functional languages', are more helpful.
"...strongly typed languages are less error prone than average while weakly typed languages are more error prone than average."
"This is strong evidence that functional static languages are less error prone than functional dynamic languages..."
"Functional languages have a smaller relationship to defects than either procedural or scripting languages."
So if you are using a strongly typed language you are better off and if you are using a functional language you are better off.  Does it matter what sort of application you are working on?
"...there is only a weak relationship between domain and language class."
Oh, well I guess then it doesn't really matter what you do, it is how you use the language.  Are there classes of defects that matter?
"Language matters more for specific categories than it does for defects overall."
"In fact, we noticed 28.89% of all the memory errors in Java come from a memory leak.  In terms of effect size, language has a larger impact on memory defects than all other cause categories."
Interestingly, even though Java has managed memory, it seemed to suffer more from memory problems than most other memory-managed languages.  This could be because of the applications considered, such as Elasticsearch, a Java-based search engine, compared to SparkleShare, a C# file sync tool.  I couldn't find anything language-related that seemed an obvious cause for Java's problems with memory compared to C#.  On the other hand, Java is way better than any unmanaged memory language.
"Go has a lot more concurrency bugs related to race conditions due to its race condition detect tool..."
"TypeScript is intended to be used as a static, strongly typed language... However in practice, we noticed developers often [50%]... use any type, a catch-all union type, thus makes TypeScript dynamic and weak."
In looking at what may be wrong with this study, I found it interesting that tool support may be a reason for differences between languages.  Basically, because a language is only as good as the people who use it and the tools around it, the study could give a false sense of knowledge.  It could be that the use cases are so complex that this one study and method will not get us much in regards to languages and when to use them.  To be fair, the researchers noted other studies, with their own limitations and similarities.  Confirmatory studies are useful, but humanity is really complex, and their study wasn't completely confirmatory.  This is a sort of social science meets well-recorded data.  Git has perfectly recorded data, but even with it, it is hard to draw conclusions because there is so much data.  There was a 'study' done a few years ago regarding swearing and check-ins in GitHub.  While amusing, it is also one more piece of data.  Keeping in mind the warning I started with at the beginning of the article, these were the authors' conclusions:
"The data indicates functional languages are better than procedural languages; it suggests that strong typing is better than weak typing; that static typing is better than dynamic; and that managed memory usage is better than unmanaged. Further, that the defect proneness of languages in general is not associated with software domains. Also, languages are more related to individual bug categories than bugs overall."
For the average tester, is there something to learn here?  I think the primary thing is that code churn has more to do with defects than anything else.  Fighting over which tool to use after a project is started is probably not worth it, unless you are rebuilding from scratch.  Keep in mind what else comes along with the language and what value that gives you.  If race conditions matter, then Go is likely a winner.  Just because you have managed memory does not mean you can't have memory issues.  Functional languages like SQL tend to have fewer defects, possibly because state is less of an issue.  Finally, no matter what language you are using and no matter what you are doing, you will have bugs.  What else did you pick up from the white paper?

Wait, wait... don't drop the curtains yet.

In violation of the rule to end an article with a question that causes action, I want to pause and consider that perhaps this subject is not to your liking.  Well, why not read up on all the data browsers leak?  It will give you a good feel for just how complicated the simple underlying HTTP connections are.  Maybe you want to learn how to model learning, in which case the Dreyfus Model of Skill Acquisition might be up your alley.  Or perhaps you want something more on testing, such as the pairwise testing white paper, or the famous session-based test management paper.  Perhaps none of the ideas I've suggested has struck your fancy.  Cem Kaner developed hundreds of links in the citations for one of his classes, many of which are white papers.  And that is to say nothing of RFCs (Requests for Comments), which help define the standards we all use.  There are even books like The Reflective Practitioner, which I am slowly reading through, but which has been well analyzed on Wikipedia.

Which brings me to the problem with Wikipedia and blogs like mine that analyze content.  People like myself pull out the conclusions for you, and so you skip reading the white paper.  Do you really want to leave your entire education up to those who form conclusions for you?  Let me be honest: most of you won't click half the links I provided, and just as many won't seriously read the cited white paper.  It is not unusual for a large percentage of people not even to read this far, complaining "TL;DR".  "So what?", you, the wise reader who has made it this far, might ask.  In my book review of An Introduction to General Systems Thinking I tried to pull out some of my favorite little pieces, some flavour, which was not directly related to the review.  Some of Gerald Weinberg's wisdom will be lost if all you do is read a short summary of his ideas.  Now you say you don't have time, and wiki is better than nothing.  Maybe that is true.  I won't argue that you shouldn't use wiki to speed through some of your education.  Just make sure to occasionally spend time looking at a subject deeply, to help deepen your thinking and understanding.  I am sure you might come up with more reasons why you shouldn't, and I won't try to out-reason you in your quest to not read deeply.  It is your choice.  I will say that sort of deep reading might be hard, but it will be worth it.

So once again, we are back to the cheesy ending with a question.  Well, I won't burden you with that sort of guilt-driven work-harder question at the end of this post.  Instead, I will give you an offer and a statement.  If anyone leaves a comment with a white paper or RFC for me to read, I will read it and might even report back on it.  Book suggestions are a dime a dozen, and sadly even I have a finite amount of patience for books and a stack of books I already want to read, but if you give a recommendation I will at least seriously consider it.  And now for the statement: I want to gain a deeper understanding of the world, and I will share some of what I learn, but you must do your own investigating as well.  Good luck with your own education!

Thursday, November 7, 2013

@Testing with [Reflections] Part II

If you haven't already, I suggest you read about reflection before reading too deeply into the topic of annotations.  In case I failed to convince you, or in case it didn't totally sink in: reflection, as I see it, is a way for code to 'think' about code.  In this case we are only considering the "reflection" piece and not the "eval" piece that I spoke about previously.  Reflection supports some pretty fascinating ideas and can be used in many different areas in different ways.  Annotations (Java) or attributes (C#), in comparison, are code having data about code, which works hand in hand with reflection.

xUnit

Let's start with one of the most common examples people in test would be exposed to:

[Test] //Java: @Test()
public void TestSomeFeature() { /*... */}

First, to be clear, the attribute (or commented-out annotation) is on the first line.  For clarity's sake, for the rest of the article I'm going to say "annotation" to mean either attribute or annotation.  It declares that the method is in fact a "Test" which can be run by xUnit (NUnit, JUnit, TestNG).

The annotation does not in fact change anything about the method.  In theory, xUnit could run ANY and ALL methods, including private methods, in an entire binary blob (jar, dll, etc.) using reflection.  In order to filter out the framework and helper methods, these frameworks use annotations, requiring them to be attached so the runner knows which methods to run.  This by itself is a very useful feature.
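To make that concrete, here is a toy sketch of how a runner might discover and run annotated methods.  The @Test annotation and TinyRunner below are my own minimal stand-ins, not the actual NUnit/JUnit/TestNG internals:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// A minimal stand-in for an xUnit-style Test annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Test {}

class SampleTests {
    @Test
    public void testSomeFeature() {}

    public void helperMethod() {}  // not annotated, so the runner skips it
}

public class TinyRunner {
    // Find and invoke every method carrying the @Test annotation.
    public static List<String> runTests(Class<?> clazz) throws Exception {
        List<String> ran = new ArrayList<>();
        Object instance = clazz.getDeclaredConstructor().newInstance();
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(instance);
                ran.add(m.getName());
            }
        }
        return ran;
    }
}
```

Note that the annotation itself does nothing; all the work happens in the loop that reflects over the methods and checks for its presence.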

The Past Anew

Looking back at my example from my previous post, you can now see a few subtle changes which I have marked NEW:


class HighScore {
 String playerName;
 int score;
 @Exclude()//NEW
 Date created;
 int placement;
 String gameName;
 @CreateWith(Class = LevelNamer.class)//NEW
 String levelName;
 //...Who knows what else might belong here.
}

Again, the processing function could be subtly modified to support this change. The change is again marked NEW:

function testVariableSetup(Object class) {
 for each variable in class.Variables {
  if(variable.containsAnnotation(Exclude.class)) //NEW
   continue; //NEW: don't process the variable.
  if(variable.containsAnnotation(CreateWith.class)) { //NEW
   variable.value = CreateNewInstance(variable.getAnnotation(CreateWith).Class).Value(); //NEW: CreateNewInstance must return the interface.
  } //NEW
  if(variable.type == String) then variable.value = RandomString();
  if(variable.type == Number) then variable.value = RandomNumber();
  if(variable.type == Date) then variable.value = RandomDate();
 }
}

So what this code demonstrates is the ability to exclude fields in any given class by attaching some meta data to it, which any functionality can look at but doesn't have to. In the high score class we marked the created date variable as something we didn't want to set, maybe because the constructor sets the date and we want to check that first. The second thing we did was we set a class to create the levelName. The levelName might have a custom requirement that it follow a certain format. Having a random String would not do for this, so we created an annotation that takes in a class which will generate the value.

Now, we could have a different custom annotation for each and every custom value type, but that would defeat the purpose of making this as generic as possible.  Instead, we use a defined pattern which can apply to several different variables.  For example, gameName also had to follow a pattern, but it was different from the levelName pattern.  You could create another class called GameNamer, and as long as it followed the same design (had a method called "Value()" that returned a string), you could just use the CreateWith(Class=X) annotation and both would act the same.  This means you would not need to add another case to the testVariableSetup method, or even change it.  In Java and C# the mechanism for this is a common ancestor, which can be either an interface or an abstract class.  That is to say, both classes inherit from the same abstract class or implement the same interface.  For the sake of completeness, and to help this make sense, I have included updated pseudo code examples below:

class HighScore {
 String playerName;
 int score;
 @Exclude()//NEW
 Date created;
 int placement;
 @CreateWith(Class = GameNamer.class)//NEW
 String gameName;
 @CreateWith(Class = LevelNamer.class)//NEW
 String levelName;
 //...Who knows what else might belong here.
}
// All NEW below:

interface ICreateWith { String Value(); }
class GameNamer implements ICreateWith { String Value() { return "Game # " + RandomNumber(); } }
class LevelNamer implements ICreateWith { String Value() { return "Level # " + RandomNumber(); } }

//In some class
class some { ICreateWith CreateNewInstance(Class class) { return (ICreateWith)class.new(); } }

TestNG - Complex but powerful

One last example that is a little more real-life and common in the testing world.  Although I think the code might be a little too complex to get into here, I want to talk about a real-life design and how it works in general.  TestNG uses annotations with an interesting twist.  Say you have a "Group", a label saying that a test is part of a set of tests.  Perhaps you have a group called "smoke" for the smoke tests you want to run separately from all the others.  TestNG might support filtering, but between TestNG and Maven and ... you decide you want to determine the filtering of tests at run time, using a flag somewhere (database, environment variable, wherever) that says "run smoke only".  During the run, TestNG calls an event saying "I'm about to run test X; here is all the annotation data about it.  Would you like to change any of it?"  At this point you can read the group information about the test.  If your flag says smoke only, you can then check the groups the test has.  If the group list does not have smoke in it, you set the test to enabled=false, changing the annotation's data at run time.  TestNG calls this Annotation Transformations.  I call it cool.
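To give a feel for the mechanism without pulling in TestNG itself, here is a toy Java version of run-time group filtering.  The @Groups annotation and class names are my own stand-ins; real TestNG uses @Test(groups = ...) plus an IAnnotationTransformer:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// A hypothetical stand-in for TestNG's groups.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Groups { String[] value(); }

class SuiteTests {
    @Groups({"smoke"}) public void loginWorks() {}
    @Groups({"regression"}) public void reportTotalsMatch() {}
    public void notATest() {}
}

public class GroupFilter {
    // Decide at run time which tests stay enabled, based on an external flag
    // (here just a parameter; it could come from an environment variable).
    public static List<String> enabledTests(Class<?> clazz, String onlyGroup) {
        List<String> enabled = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            Groups g = m.getAnnotation(Groups.class);
            if (g != null && Arrays.asList(g.value()).contains(onlyGroup)) {
                enabled.add(m.getName());
            }
        }
        Collections.sort(enabled);
        return enabled;
    }
}
```

The key point is the same as TestNG's: the decision about what runs is made at run time by reading annotation data, not by editing the tests.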

The weird part is that you are modifying, at run time, annotation data that is hard coded at compile time.  That is to say, annotation values cannot actually be changed at runtime*, but a copy of the instance of them can be.  That is what TestNG actually changes, from what I can tell.

If you are reading this and saying "this topic is rather confusing," don't feel too bad.  I know it is confusing.  The TestNG part in particular is a bit mind-bending.  And to be clear, I don't see myself as an expert.  There are way more complex ideas out there that just amaze me.

* This is from what I can tell.  Perhaps there are reflective properties to let you do this.  You can however override annotations through inheritance, but that is a more complex piece.

Wednesday, October 16, 2013

Reflections


I have recently been reading over some of Steve Yegge's old posts, and they reminded me of a theme I wanted to cover.  There is an idea we call meta-cognition: testers often use it to defocus and focus, to occasionally come back up for air and look for what we might have missed.  It is an important part of our awareness.  We try to figure out what we don't know and transfer that into coherent questions or comments.  Part of what we do is reflect on the past, using a learned sub-conscious routine, and attempt to gather data.

In the same way, programming too has ways of doing this, in some cases and in some frames of reference.  This is the subject I wish to visit and consider from a few different angles.  In some languages this is called reflection, which uses a reification of typing to introspect on the code.  Other languages allow other styles of the same concept and call them 'eval' statements.  No matter the name, the basic idea is brilliant: some computer languages can literally consider things in a meta sense, intelligently.

Reflections

So let's consider an example. Here is the class under consideration:
class HighScore {
 String playerName;
 int score;
 Date created;
 int placement;
 String gameName;
 String levelName;
 //...Who knows what else might belong here.
}

First done poorly in pseudo code, here is a way to inject test variables for HighScore:
function testVariableSetup(HighScore highScore) {
highScore.playerName = RandomString();
highScore.gameName = RandomString();
highScore.levelName = RandomString();
highScore.score = RandomNumber();
highScore.created = RandomDate();
//... I got tired.
}

Now here is a more ideal version:
function testVariableSetup(Object class) {
 for each variable in class.Variables {
  if(variable.type == String) then variable.value = RandomString();
  if(variable.type == Number) then variable.value = RandomNumber();
  if(variable.type == Date) then variable.value = RandomDate();
 }
}

Now what happens when you add a new variable to your class?  For that matter, what happens when you have 2 or more classes you need to do this in?  The second version can be applied to anything that has Strings, Dates, and Numbers.  Perhaps we are missing some types, like Booleans, but it doesn't take too much effort to cover the majority of the simple types.  Once you have that, you only have to pass in a generic object and it will magically set all the fields.  Perhaps you want filtering, but that too is just another feature for the method.
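As a sketch, the ideal pseudo code version translates into real Java roughly like this.  The class and helper names (TestValueInjector, randomString, HighScoreDemo) are all my own for illustration:

```java
import java.lang.reflect.Field;
import java.util.Date;
import java.util.Random;

// A runnable Java take on the "ideal version": walk every declared field
// and assign a random value based on its type.
public class TestValueInjector {
    private static final Random RNG = new Random();

    static String randomString() { return "str-" + RNG.nextInt(10000); }
    static int randomNumber() { return RNG.nextInt(10000); }
    static Date randomDate() { return new Date(Math.abs(RNG.nextLong() % 4_102_444_800_000L)); }

    public static void testVariableSetup(Object target) throws IllegalAccessException {
        for (Field f : target.getClass().getDeclaredFields()) {
            f.setAccessible(true);  // reach private fields too
            Class<?> t = f.getType();
            if (t == String.class) f.set(target, randomString());
            else if (t == int.class || t == Integer.class) f.set(target, randomNumber());
            else if (t == Date.class) f.set(target, randomDate());
            // ...Booleans, longs, etc. are easy to add here
        }
    }
}

class HighScoreDemo {  // a hypothetical class to inject values into
    String playerName;
    int score;
    Date created;
}
```

Adding a new String or Date field to HighScoreDemo requires no change to the injector, which is the whole point.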

The cool thing is, this can also be used to get all the fields without knowing what the fields are. In fact, this one is so simple, I am going to show a real life example, done in Java:

//import java.lang.reflect.Field;
//import java.util.*;

 public static List<String> getFieldNames(Object object) {
  List<String> names = new ArrayList<String>();
   for(Field f : object.getClass().getDeclaredFields()) { //getDeclaredFields includes private fields; getFields returns only public ones
   f.setAccessible(true);
   names.add(f.getName());
  }
  return names;
 }

 public static Object getFieldValue(String fieldName, Object object) {
  try{
   Field f = object.getClass().getDeclaredField(fieldName);
   f.setAccessible(true);
   return f.get(object);
  }catch (Throwable t) {
   throw new Error(t);
  }
 }

 public static Map<String, Object> getFields(Object object) {
  HashMap<String, Object> map = new HashMap<String, Object>();
  for(String item : getFieldNames(object)) {
   map.put(item, getFieldValue(item, object));
  }
  return map;
 }

Let's first define the term "Field."  This is the Java term for a member variable, be it public or private.  In this case, there is code to get all the field names, get any field value, and get a map of field names to values.  This allows you to write really quick debug strings by simply reflecting over any object and spitting out name/value pairs.  Furthermore, you could make it filter out private variables or variables with a name like X, or, rather than getting fields, use it to get properties instead of variables.  Obviously this can be rather powerful.
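Here is a sketch of that debug-string idea, using a condensed version of the helpers above (getDeclaredFields brings private fields along too; the DebugDump and Score names are mine):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Build a name -> value map for any object, then turn it into a debug string.
public class DebugDump {
    public static Map<String, Object> getFields(Object object) {
        Map<String, Object> map = new LinkedHashMap<>();
        try {
            for (Field f : object.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                map.put(f.getName(), f.get(object));
            }
        } catch (IllegalAccessException e) {
            throw new Error(e);
        }
        return map;
    }

    public static String dump(Object object) {
        StringBuilder sb = new StringBuilder(object.getClass().getSimpleName());
        for (Map.Entry<String, Object> e : getFields(object).entrySet()) {
            sb.append(' ').append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }
}

class Score {  // hypothetical example object, private fields and all
    private String playerName = "Ada";
    private int score = 42;
}
```

One call to dump() replaces the hand-written toString() you would otherwise maintain for every class.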


Eval

Let me give one other example of how reflection-like properties can work.  Consider the eval statement, a method of loading in code dynamically.  Starting with a very simple bit of JavaScript, let me show you what eval can do:

  var x = 10;
  alert( eval('(x + 2) * 5'));

This would display an alert with the value 60. In fact, an eval can execute any amount of code, including very complex logic. This means you can generate logic using strings rather than hard code it.

While I believe the eval statement is rather slow (in some cases), it can be useful for dynamically generating code.  I'm not going to write out an exact example of this, but I want to give you an idea of a problem:

for(int a = 0; a!=10; a++) {
  for(int b = 0; b!=10; b++) {
    //for ... {
      test(X[a][b][...]);
    //}
  }
}

First of all, I do know you could use recursion to deal with this problem, but that is a hard solution to write, hard to follow, and hard to debug. If you were in a production environment, maybe that would be the way to go for performance reasons, but for testing, performance is often not as critical. Now imagine if you had something that generated dynamic strings. I will again attempt to pseudo code an example:

CreateOpeningsFor(nVariables, startValue, endValue) {
  String opening = "for(a{0} = {1}; a{0}!={2}; a{0}++) {";  
  String open = "";
  for(int i = 0; i!=nVariables; i++) {
    open = open + "\n" + String.Format(opening, i, startValue, endValue);
  }
  return open;
}


eval(CreateOpeningsFor(5, 0, 10) + CreateFunctionFor("test", "X", 5) + CreateCloseFor(5));
//TODO write CreateFunctionFor, CreateCloseFor...
//Should look like this: String function = "{0}({1}{2});"; String closing = "}";

While I skipped some details, it should be obvious to someone who programs that this can be completed. Is this a great method? Well, it does the same thing as the hard coded method, yet it is dynamically built, thus is easily changed. You can log the created function even and place it in code if you get worried about performance. Then if you need to change it, change the generator instead of the code. I don't believe this solves all problems, but it is a useful way of thinking about code. Another tool in the tool belt.

The next question you might ask is: do I in fact use these techniques? Yes, I find these tools invaluable for some problem sets. I have testing code that automatically reflects over objects, generates lots of different values, and applies them to various fields or variables. I have used it in debugging. I have used it to create easily filterable dictionaries so I could get all variables with names like X. It turns the inflexible type system into a helpful one. I have even used it in creating a simple report database system, which used reflection to create insert/update statements where the field names in the code are the column names in the database. Is it right for every environment? No, of course not. But be warned: as a tool it can feel a little like the proverbial hammer that makes everything look like a nail. You just have to keep in mind that it is often a good ad-hoc tool, but not always a production-worthy tool without significant design consideration. That being said, after a few uses and misuses, most good automators can learn when to use it and when not to.

Another reasonable question is: what other techniques are useful and similar to this? One closely related tool would be regular expressions, a tool I don't wish to go in depth on, as I feel there is already a ton of material on it. The other useful technique is known as annotations or attributes. These are used to define meta data about a field, method, or class. As I think there are a lot more details to go over, I will try to write a second post on this topic in combination with reflection, as they are powerful together.

Wednesday, September 4, 2013

Pex, a GTAC presentation

Having watched the talk about Pex in the GTAC 2013 presentations, I thought it was a rather interesting technology.  The basic idea is that you point Pex at a particular method and it will generate test cases that find bugs in your implementation.  It tries to be relatively exhaustive within equivalence classes.  It really is only meant for unit testing; however, I could imagine it being used to generate interesting test cases on a wider basis.

The interesting thing, outside of the particular testing technology, is that it can also be used as a game, a training tool, or for interviewing.  In particular, you can take a previously created secret implementation and ask someone to re-implement the solution.  Since Pex knows how to test compiled .NET code, it just finds interesting values based upon the secret implementation (i.e., the oracle) and then sees if the output the oracle creates equals the output you generate for the same value.  If they are not equal, your implementation is wrong.  Since you can retry almost instantly, it becomes relatively easy to generate possible solutions until your implementation matches the secret one.  Having tried a good number of the challenges, I must say my current favourite is making English words plural.  While this isn't exactly how I generate automated test cases, it certainly made me think about other possible input values I should be testing.
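The comparison loop itself is simple; Pex's cleverness is in how it chooses inputs by analyzing the compiled code.  Here is a rough sketch of just the oracle-comparison half, done in Java with a plain input sweep instead of Pex's analysis, and with two hypothetical implementations of my own:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Compare a candidate implementation against a secret oracle over a range
// of inputs, reporting every input where they disagree.
public class OracleHarness {
    public static List<Integer> disagreements(IntUnaryOperator secret,
                                              IntUnaryOperator candidate,
                                              int from, int to) {
        List<Integer> failures = new ArrayList<>();
        for (int x = from; x <= to; x++) {
            if (secret.applyAsInt(x) != candidate.applyAsInt(x)) {
                failures.add(x);
            }
        }
        return failures;
    }
}
```

For example, a candidate `x -> x % 10` disagrees with a secret `x -> Math.abs(x) % 10` on every negative input, which is exactly the kind of "interesting value" Pex surfaces to show your implementation is wrong.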

Let me know in the comments if you solve it, and if you want, post your own solution.  I have a solution, but I'll resist posting it for now.  At some future point (maybe a few weeks from now), I'll post a comment with my solution.