.NET Goes Open Source

Some big news in the .NET/Microsoft world today: the .NET runtime (roughly equivalent to the JVM in the Java world) will now be fully open source (MIT License) and cross-platform. This means .NET will be fully supported on the Mac and Linux. It looks like the aims of Mono, the current community-driven cross-platform open source implementation of .NET, will now be fully realized as part of the core .NET platform. Mono is primarily leveraged for cross-platform mobile app development with the Xamarin ecosystem. Microsoft’s further partnership with Xamarin means that .NET is one of the most compelling mechanisms for cross-platform development today.

I’m excited to see Microsoft embrace open source. The effective patterns and practices that we’ve seen emerge and evolve in the open source ecosystem, regardless of platform, can be lost on individual enterprises or proprietary vendors. It’s great to see Microsoft recognize the value and effectiveness of these patterns and align its efforts with how developers work, not the other way around.

I’m also happy to see a desire at Microsoft to deliver the best solutions on any platform, not just the BYO-platform attitude that we started to see a year or two ago (which was still revolutionary for Microsoft).

News round-up:

The State of Front-End Package Management

Scott Hanselman just introduced upcoming support for the front-end build tools grunt, gulp, bower, and npm on his blog. Visual Studio is going to have support for front-end build tooling and package management. As Scott mentions, there are still possible use cases for using NuGet to distribute front-end assets.

However, I think not using any package management for front-end assets is OK for the most part. I’m actually not a big fan of tools like require.js either. While it encourages people to structure their JavaScript better, we shouldn’t be wrapping all our JavaScript code in framework-specific “define” methods; doing so couples your entire app to require.js. Justin Searls talks about this in the latest episode of The Changelog: http://thechangelog.com/128/.

Just because NuGet is a lifesaver and a huge optimizer for managing third-party dependencies doesn’t mean tools like bower will solve your front-end dependency management woes. For one, pulling in new JavaScript is not hard; perhaps it is too easy. Second, if you have to use a tool to manage your JavaScript dependencies, you’re probably running a little fat on the client anyway and need to go on a diet. Lastly, I don’t think directly including third-party JavaScript will ever go out of fashion the way maintaining a “lib” folder of .dll dependencies has.

Goals vs. Objectives – The Secret Ingredient That Explains The Difference

Why is there confusion about the difference between goals and objectives?

A colleague and I were recently collaborating on a new effort, and there was some confusion over the meaning of goals vs. objectives. To him, the words were interchangeable – perhaps a matter of formal semantics. Indeed, we could include other terms in our discussion: outcomes, benefits, mission, vision, purpose, etc. The nuances of how these terms relate are varied.

Why is this the case? First, in our initial exposure to these concepts, our responsibilities and tasks are more or less defined, ideally with a correlated goal or objective. Sometimes, while our tasks and responsibilities may be defined, our organization or environment may lack any clear sense of purpose. Perhaps most damaging, our organization or environment may have defined goals or objectives but lack the accountability or discipline to act in alignment with them. This is a failure of integrity. In these contexts, any goal or objective can provide the necessary orientation and direction on a daily or weekly basis.

Second, there are often personal or organizational challenges that overshadow any concern that would meaningfully differentiate a goal from an objective. Even in a position of management or leadership, one’s role can simply be that of steering and communication in relation to stated goals and objectives. Other concerns can quickly overwhelm.

A simple search returns a number of different interpretations of the difference between goals and objectives, some of which can be helpful. But there is a nagging feeling that it should be ‘common sense’. Why should a particular blog post or book be necessary to illuminate the difference, especially for something that can have a huge impact on the direction and effectiveness of one’s efforts?

Here is the secret ingredient: your team. Your team should have a clear, “common sense” model that encapsulates the goals, objectives, and outcomes that serve its purpose. Depending on the size of the team (it could be just you), or whether it is part of a larger effort or organization (or serving/partnering with other teams), different components of these orienting and decision factors may be inherited, shared, tweaked, emphasized, etc. But for goals and objectives to be effective, they must be shared, and there must be a shared understanding of how they work together.

It really doesn’t matter too much what the individual definitions are; having a shared model and process is what matters. Dr. James T. Brown puts it something like this: 1) have a process, 2) follow the process, and 3) improve the process. The model or definitions for goals and objectives should be “common sense” and provide just enough definition to improve the accountability and discipline of an effort. What does matter is that the definitions are shared. Without a shared understanding, accountability and discipline will suffer.

At an individual level, this means “managing oneself”: have a disciplined, intentional approach for fulfilling your responsibilities. If you are a member of an organization, using shared models and definitions is one way you can increase accountability, facilitate disciplined execution, and encourage organizational integrity. If you are on the leading edge of an effort that requires enhanced program or project management, seek to partner with others facing the same challenges to mature the shared ethos that will build a stronger organization capable of meeting its goals and objectives – whatever their definitions happen to be.

Freelancing Early In Your Software Development Career

My advice to most freelancers early in their career: don’t do it long-term. There is a definite long-term limiting factor when it comes to freelancing that impacts your professional (craft) and career development. This is particularly true in any industry where it takes a team to deliver a product or service.

Freelancing is a tremendous learning experience. I freelanced throughout most of college and was able to put into practice what I was learning on campus and in the classroom every day. After college, I did a stint at a small software firm and then a start-up. With leads from my previous freelance work, I started out solo again after the start-up was acquired. For nearly the next two years, I worked for myself, doing hourly work and managing a few small fixed-bid projects with some other freelancers that brought in a nice profit. Making the equivalent of someone in a ‘salaried’ job in my industry with 10 more years of experience, I was fortunate with the opportunities given to me.

I was billable 60% of my time and had a 30-hour work week. I technically had more capacity, but I took Gerald Weinberg‘s advice to do client work three days a week and use the other two days for learning and getting more business. It was a great lifestyle.

I had also matured, through personal experience and with the encouragement of others, to realize that my calling was to build software. I love building software. Freelancing has many connotations and meanings, but it is not a vocation. It is not a calling. (Neither is “working 9-5”.) The real reason I left freelancing was the answer to this question: how do I build better software?

I realized it takes a team to build good software. I saw my limits. So, I decided to join a team (which turned out to be a great team). Measuring my decision financially, I took a drastic cut compared to my freelance income (but I’m still thriving). However, there is no doubt that what I’ve gained in experience, knowledge, and opportunities for investment and growth in my craft and professional community has been incomparable to anything I could have achieved on my own.

I’ve left out many of the “whys” and “hows” of freelancing, what it’s been like since I joined Pariveda Solutions, and why I chose consulting – and Pariveda – in particular. Perhaps I will post more on this later, but feel free to reach out if you have any questions.

I discovered there are natural limits to what freelancing has to offer, along with the doldrums and danger of blind self-reliance that working on one’s own can bring. While we have the rare luxury of a profession that can give us financial and ‘lifestyle’ success on our own, pursuing excellence in our craft will ultimately raise the level of commitment and community with those that we work with and for.

Recent Azure Highlights – November 2013

Earlier this week Scott Guthrie had another round of announcements (just two weeks since the last round) of Azure awesomeness. Highlights included:

  • Web Sites now support remote debugging
  • WebSocket support for Azure Web Sites
  • Continuous delivery support for Git in TFS

Some takeaways for me: Azure Web Sites are receiving the same TLC as traditional Azure Cloud Services (web and worker roles), and Git is becoming a first-class citizen in the TFS/Microsoft world. Git provides key capabilities that allow enterprises to be more collaborative than traditional TFS version control, and it aligns with open source development practices that foster multiple contributors across projects.

In other Azure news, Microsoft has released the Windows Azure Scheduler. It provides the often-needed capability of invoking services or sending messages on a simple or complex recurring schedule. This eliminates the need to rely on third-party services or to bootstrap a service around a task scheduler library. Sandrino Di Mattia has an excellent overview of the service here: http://fabriccontroller.net/blog/posts/a-complete-overview-to-get-started-with-the-windows-azure-scheduler/

.NET/Azure Folks: Learn Reactive Programming

On November 4th, the course Principles of Reactive Programming starts on Coursera: https://www.coursera.org/course/reactive. While the course is not taught in .NET, the principles are platform agnostic and are particularly targeted at building distributed and scalable systems. You can learn more about the characteristics of these systems here: http://www.reactivemanifesto.org. One of the course instructors is Erik Meijer, a former architect at Microsoft who played a major role in the development of LINQ and Reactive Extensions.

Current guidance on building distributed systems on Azure focuses largely on infrastructural concerns, such as using queues or Azure Service Bus for distributing work. (See Clemens Vasters’ video blog on Channel9 and course on Pluralsight for excellent guidance in this area.) This will change as frameworks and tools emerge that embrace the distributed nature of computing, much in the same way that Entity Framework provides a programmatic model for the database, or Web API gives a model for HTTP.
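
For context, here is a minimal sketch of that queue-centric style, assuming the WindowsAzure.ServiceBus NuGet package and an existing Service Bus queue named “work” (the connection string and payload are placeholders):

    using System;
    using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus NuGet package

    class QueueWorkDistribution
    {
        static void Main()
        {
            // Placeholder connection string and queue name for illustration only.
            var connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
            var client = QueueClient.CreateFromConnectionString(connectionString, "work");

            // Producer: hand a unit of work to the queue.
            client.Send(new BrokeredMessage("resize-image:42"));

            // Consumer (typically a separate worker role): pull a message and complete it.
            BrokeredMessage message = client.Receive();
            if (message != null)
            {
                Console.WriteLine("Processing: {0}", message.GetBody<string>());
                message.Complete(); // removes the message from the queue
            }
        }
    }

The work distribution logic lives entirely in plumbing like this, which is exactly the gap the frameworks below aim to fill with a higher-level programming model.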

These efforts are already well underway at Microsoft. Microsoft internally uses Reactive Extensions for message processing on Azure, as evidenced by its CloudFx framework. It is also developing actor model libraries, similar to Akka and Erlang’s actors, that provide a simple programming model for distributed systems. Code-named “Orleans”, a framework similar to Akka has been in the works at Microsoft Research since as far back as 2010 and is used in production by the Halo team for building real-time gaming services. ActorFx, an actor framework for Windows Azure, is another effort in this area to provide a language-independent model for distributed systems.
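
To make the actor idea concrete, here is a toy, framework-free sketch in plain C# (the CounterActor type is hypothetical and does not reflect Orleans’ or ActorFx’s actual APIs): each actor owns its state exclusively and processes messages from a mailbox one at a time, which is what lets these frameworks scale across cores and machines without explicit locking.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // A toy actor: state is private and is only touched by the single
    // message loop below, so no locks are needed.
    class CounterActor
    {
        private readonly BlockingCollection<int> _mailbox = new BlockingCollection<int>();
        private int _count; // state owned exclusively by this actor

        public CounterActor()
        {
            // One consumer loop per actor: messages are processed sequentially.
            Task.Factory.StartNew(() =>
            {
                foreach (var increment in _mailbox.GetConsumingEnumerable())
                {
                    _count += increment;
                    Console.WriteLine("Count is now {0}", _count);
                }
            }, TaskCreationOptions.LongRunning);
        }

        // Callers never touch the state directly; they only send messages.
        public void Tell(int increment)
        {
            _mailbox.Add(increment);
        }
    }

Frameworks like Orleans and Akka then layer on what a sketch like this leaves out, such as distribution across machines, lifecycle management, and fault tolerance.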

While all these frameworks are at various levels of maturity (or availability), the consensus in the community is clear: we need a programmatic model for building resilient and scalable systems that takes advantage of both the increasing prevalence of distributed systems in the cloud and the increasing number of cores on today’s processors. If you are building systems that require resiliency and/or scalability in a distributed or multi-core environment, understanding the patterns and solutions available to build them is essential to good software engineering. Of these frameworks and libraries, Reactive Extensions is already a success story, adopted both within Microsoft and by the wider community. Recent examples include GitHub’s .NET client, whose latest release features a Reactive companion package, and Netflix bringing the same concepts to the Java world with RxJava, which it uses heavily with its APIs.
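
To give a flavor of the Rx programming model, here is a minimal sketch assuming the Rx-Main (Reactive Extensions) NuGet package; the simulated readings and the one-second windows are just illustrative choices:

    using System;
    using System.Linq;
    using System.Reactive.Linq; // from the Rx-Main NuGet package

    class RxSketch
    {
        static void Main()
        {
            // Simulate a stream of readings arriving every 100 ms. In a real
            // system this observable could wrap a queue, a socket, or any
            // other asynchronous event source.
            IObservable<long> readings = Observable.Interval(TimeSpan.FromMilliseconds(100));

            // Declaratively compose the stream: buffer readings into
            // one-second windows and compute an average per window.
            IDisposable subscription = readings
                .Buffer(TimeSpan.FromSeconds(1))
                .Where(window => window.Count > 0)
                .Select(window => window.Average())
                .Subscribe(avg => Console.WriteLine("Average over the last second: {0}", avg));

            Console.ReadLine();     // keep the console app alive
            subscription.Dispose(); // tear down the pipeline
        }
    }

The same composition style carries over to RxJS on the client and RxJava on the JVM, which is part of what makes Rx such an attractive skill to invest in.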

Instead of re-inventing the wheel every time we come up against these challenges, let’s learn and work together to build and master the tools needed to address and achieve the potential of today’s distributed environment.

Update: In addition to the Coursera course on reactive programming principles, Erik Meijer is offering a live course focused solely on Reactive Extensions (Rx). Sign up at https://rx.yapsody.com.

Links/Resources

Reactive Programming

Principles of Reactive Programming on Coursera
Reactive Manifesto

Reactive Extensions

Reactive Extensions Project Home
GitHub: Rx.NET and RxJS and more
DNR: Reactive Extensions
Functional Reactive Programming in the Netflix API
Learn Rx in SF with Erik Meijer himself: https://rx.yapsody.com/    

Microsoft “Orleans”

Orleans Project Home
Orleans Channel9 Team Interview
Microsoft opens early adopter program for its ‘Orleans’ cloud framework
Orleans: Cloud Computing for Everyone – ACM Symposium on Cloud Computing (SOCC 2011)

CloudFx

Understanding and Using the CloudFx Framework
CloudFx Samples
CloudFx on NuGet

ActorFx

ActorFx Project Home
Programming the Cloud with Actors: Inside ActorFx

Akka

What I Learned at Pablo’s Fiesta: A Better (and testable) Entity Framework Story with Highway.Data

Last weekend I attended Pablo’s Fiesta, an open spaces conference in Austin, Texas. There were some great sessions and discussions on everything from working remotely for GitHub and contributing to open source to using NuGet in the enterprise.

One particularly interesting session that I attended was a review of a framework that had been on my radar: Highway.Data. It provides a consistent, clean, and testable interface for data access around Entity Framework, NHibernate, or RavenDB using an opinionated approach to data access patterns.

Highway.Data is composed of a Repository, Unit of Work (data context), and command/query object patterns for reading and manipulating data. It nicely separates the core framework from the ORM adapters such as Highway.Data.EntityFramework.

Here is an excellent run-down of the patterns and quality attributes the framework uses to guide its development: http://hwyfwk.com/blog/2013/10/19/understanding-the-patterns/. One thing here that is relatively uncommon in data access libraries is the concept of a Query object. Refer to the aforementioned post for more discussion of that pattern and the benefits you get from using it.
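
To illustrate the pattern (in general terms, not Highway.Data’s exact API; see the project documentation for that), a query object packages the intent of a single read into a small class that can be unit tested against an in-memory IQueryable without touching Entity Framework. The interface and types below are hypothetical:

    using System.Linq;

    // Hypothetical shape of the pattern: one class per read.
    public interface IQueryObject<T>
    {
        IQueryable<T> Execute(IQueryable<T> source);
    }

    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public bool Shipped { get; set; }
    }

    // The query carries its own parameters and can be tested against
    // new[] { ... }.AsQueryable() with no database involved.
    public class UnshippedOrdersForCustomer : IQueryObject<Order>
    {
        private readonly string _customer;

        public UnshippedOrdersForCustomer(string customer)
        {
            _customer = customer;
        }

        public IQueryable<Order> Execute(IQueryable<Order> source)
        {
            return source.Where(o => o.Customer == _customer && !o.Shipped);
        }
    }

A repository then simply hands the query its underlying IQueryable: an Entity Framework DbSet in production, a List<Order>.AsQueryable() in tests.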

This is likely going to be my go-to when building out projects with Entity Framework that need a good testing story.

Links