Adding parameters to the OpenID Connect Authorization URL

I am busy working on some more ASP.NET Core samples to demonstrate various techniques people can use to authenticate their users with Auth0. In most of our samples we use the standard OpenID Connect middleware, and one of the things I wanted to do was to pass extra parameters when the request is made to the Authorization endpoint.

At Auth0 we allow users to authenticate with multiple social and enterprise providers. Usually when the Authorization endpoint is called, we will display Lock, which will prompt the user for their username and password, and also allow them to sign in with any of the connected social or enterprise providers.

We can, however, also invoke any of the social connections directly, bypassing Lock completely and sending the user straight to the authorization page for the relevant service. So as an example, we can send the user directly to the Google login by passing along the query string parameter connection=google-oauth2.
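To make this concrete, the resulting call to the authorization endpoint would look something like the following (a sketch with placeholder values; the middleware adds the remaining standard OAuth 2.0 / OIDC parameters):

https://YOUR_AUTH0_DOMAIN/authorize?response_type=code&client_id=CLIENT_ID&redirect_uri=https://YOUR_URL/signin-auth0&connection=google-oauth2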

So how do you do this when using the OpenID Connect middleware?

All you need to do is handle the OnRedirectToIdentityProvider event when configuring the OpenIdConnectOptions, and add the extra query string parameters by calling the ProtocolMessage.SetParameter method on the supplied RedirectContext:

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions("Auth0")
{
    // Set the authority to your Auth0 domain
    Authority = "https://YOUR_AUTH0_DOMAIN",

    // Configure the Auth0 Client ID and Client Secret
    ClientId = "CLIENT ID",
    ClientSecret = "CLIENT SECRET",

    // Do not automatically authenticate and challenge
    AutomaticAuthenticate = false,
    AutomaticChallenge = false,

    // Set response type to code
    ResponseType = "code",

    // Set the callback path
    CallbackPath = new PathString("/signin-auth0"),

    // Configure the Claims Issuer to be Auth0
    ClaimsIssuer = "Auth0",

    Events = new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = context =>
        {
            context.ProtocolMessage.SetParameter("connection", "google-oauth2");

            return Task.FromResult(0);
        }
    }
});

Now the user will be sent directly to the Google login page whenever the OIDC middleware is invoked.

This however means that the user will always be directed to sign in with their Google account. What if we want to make this configurable somehow?

At the moment the Login action in the AccountController which issues the challenge to the OIDC middleware looks as follows:

public IActionResult Login()
{
    return new ChallengeResult("Auth0", new AuthenticationProperties() { RedirectUri = "/" });
}

What we need to do is add a connection parameter to the Login action and then if the user passed in a value for that parameter we can add it to the Items dictionary of the AuthenticationProperties instance which is passed along with the challenge:

public IActionResult Login(string connection)
{
    var properties = new AuthenticationProperties() { RedirectUri = "/" };

    if (!string.IsNullOrEmpty(connection))
        properties.Items.Add("connection", connection);

    return new ChallengeResult("Auth0", properties);
}

And then also change the OnRedirectToIdentityProvider delegate to check if the connection property was passed along, and if it was, append the value to the ProtocolMessage parameters:

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions("Auth0")
{
    // Set the authority to your Auth0 domain
    Authority = "https://YOUR_AUTH0_DOMAIN",

    // Configure the Auth0 Client ID and Client Secret
    ClientId = "CLIENT ID",
    ClientSecret = "CLIENT SECRET",

    // Do not automatically authenticate and challenge
    AutomaticAuthenticate = false,
    AutomaticChallenge = false,

    // Set response type to code
    ResponseType = "code",

    // Set the callback path
    CallbackPath = new PathString("/signin-auth0"),

    // Configure the Claims Issuer to be Auth0
    ClaimsIssuer = "Auth0",

    Events = new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = context =>
        {
            if (context.Properties.Items.ContainsKey("connection"))
                context.ProtocolMessage.SetParameter("connection", context.Properties.Items["connection"]);

            return Task.FromResult(0);
        }
    }
});

Now, when you go to http://YOUR_URL/Account/Login, the OIDC middleware will be invoked and Auth0 Lock will be displayed as always. However, if you go to http://YOUR_URL/Account/Login?connection=google-oauth2, the user will be sent directly to the Google authorization page. Likewise, if you go to http://YOUR_URL/Account/Login?connection=github, the user will be sent directly to the GitHub authorization page.


Using Roles with the ASP.NET Core JWT middleware

Here is a great find: The JWT middleware in ASP.NET Core knows how to interpret a “roles” claim inside your JWT payload, and will add the appropriate claims to the ClaimsIdentity. This makes using the [Authorize] attribute with Roles very easy.

This is best demonstrated with a simple example.

First of all I head over to JWT.io and create a JSON Web Token with the following payload:

{"iss":"http://www.jerriepelser.com","aud":"blog-readers","sub":"123456","exp":1499863217,"roles":["Admin","SuperUser"]}

Note the array of roles in the “roles” claim.

This is an HS256 token, signed with the secret “mysuperdupersecret”.

In my ASP.NET Core application I am configuring the JWT middleware:

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        var keyAsBytes = Encoding.ASCII.GetBytes("mysuperdupersecret");

        var options = new JwtBearerOptions
        {
            TokenValidationParameters =
            {
                ValidIssuer = "http://www.jerriepelser.com",
                ValidAudience = "blog-readers",
                IssuerSigningKey = new SymmetricSecurityKey(keyAsBytes)
            }
        };
        app.UseJwtBearerAuthentication(options);

        app.UseMvc();
    }
}

When I make a request to my API with the JWT created above, the array of roles in the “roles” claim in the JWT will automatically be added as claims with the type http://schemas.microsoft.com/ws/2008/06/identity/claims/role to my ClaimsIdentity.

You can test this by creating the following simple API method that returns the user’s claims:

public class ValuesController : Controller
{
    [Authorize]
    [HttpGet("claims")]
    public object Claims()
    {
        return User.Claims.Select(c => new
        {
            Type = c.Type,
            Value = c.Value
        });
    }
}

So when I make a call to the /claims endpoint above, and pass the JWT generated before, I will get the following JSON returned:

[{"type":"iss","value":"http://www.jerriepelser.com"},{"type":"aud","value":"blog-readers"},{"type":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier","value":"123456"},{"type":"exp","value":"1499863217"},{"type":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role","value":"Admin"},{"type":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role","value":"SuperUser"}]

Where this gets really interesting is when you consider that passing Roles to the [Authorize] attribute will actually check whether there is a claim of type http://schemas.microsoft.com/ws/2008/06/identity/claims/role with the value of the role(s) you are authorizing.

This means that I can simply add [Authorize(Roles = "Admin")] to any API method, and that will ensure that only JWTs where the payload contains the claim “roles” containing the value of Admin in the array of roles will be authorized for that API method.

public class ValuesController : Controller
{
    [Authorize(Roles = "Admin")]
    [HttpGet("ping/admin")]
    public string PingAdmin()
    {
        return "Pong";
    }
}

This makes things very easy.

What makes this doubly interesting is that this works with the OpenID Connect middleware as well. In other words, if the ID Token returned when you authorize a user using the OIDC middleware contains a “roles” claim, the exact same principle applies - simply decorate the MVC controllers with [Authorize(Roles = "Admin")] and only users whose ID Token contains those claims will be authorized.

So bottom line: Ensure the “roles” claim of your JWT contains an array of roles assigned to the user, and you can use [Authorize(Roles = "???")] in your controllers. It all works seamlessly.

Running a specific test with .NET Core and NUnit

I converted the Unit tests for the Auth0.NET SDK to .NET Core. Currently the unit testing framework being used is NUnit, and NUnit 3 comes with a test runner for .NET Core.

You can make use of it by configuring your project.json as follows:

{"version":"1.0.0-*","dependencies":{"NUnit":"3.5.0","dotnet-test-nunit":"3.4.0-beta-3"},"testRunner":"nunit","frameworks":{"netcoreapp1.0":{"imports":"portable-net45+win8","dependencies":{"Microsoft.NETCore.App":{"version":"1.0.0-*","type":"platform"}}}}}

The configuration above is current as of the writing of this blog post. Please refer to the NUnit 3 Test Runner for .NET Core GitHub page to obtain the most up-to-date information on how to configure it.

With this in place you can easily run your unit tests from the command line by simply running the command:

dotnet test

This will, however, run all the tests in a particular assembly (except for the Explicit ones). But what if you want to run only a specific unit test?

Well, for that you can refer to the documentation for the Console Command Line. According to that documentation, one of the parameters you can pass to the Console Runner is --test, which allows you to specify a comma-separated list of names of tests to run.

You can also pass this --test parameter to the dotnet test runner, which, it seems, then passes it on to the NUnit .NET Core test runner. So for example, if I wanted to run the unit test Auth0.ManagementApi.IntegrationTests.UsersTests.Test_users_crud_sequence, I could execute the following command:

dotnet test --test Auth0.ManagementApi.IntegrationTests.UsersTests.Test_users_crud_sequence

And that will then run only that particular unit test.

Using Configuration files in .NET Core Unit Test Projects

Another thing I came across while converting the integration tests for the Auth0.NET SDK to .NET Core was that I had to make use of configuration files to store the settings the integration tests need to talk to Auth0.

Here are some of the basics which got it working for me…

Add the configuration file

First, add a client-secrets.json file to the Integration test project, e.g.

{"AUTH0_CLIENT_ID":"...","AUTH0_CLIENT_SECRET":"..."}

Configure the client-secrets.json file to be copied to the output directory by updating the buildOptions in the project.json file:

{"version":"1.0.0-*","buildOptions":{"copyToOutput":{"include":["client-secrets.json"]}},"dependencies":{"..."},"testRunner":"nunit","frameworks":{"net461":{}}}

Include the .NET Core Configuration NuGet package

Include the JSON configuration file NuGet package (Microsoft.Extensions.Configuration.Json) in your project.json:

{"version":"1.0.0-*","buildOptions":{"copyToOutput":{"include":["client-secrets.json"]}},"dependencies":{"...","Microsoft.Extensions.Configuration.Json":"1.0.0"},"testRunner":"nunit","frameworks":{"net461":{}}}

Be sure to run dotnet restore after you have added the package.

Use the configuration in your unit tests

You can now use the configuration file in your unit tests by using the ConfigurationBuilder class:

var config = new ConfigurationBuilder()
    .AddJsonFile("client-secrets.json")
    .Build();

And then access any configuration value:

var clientId = config["AUTH0_CLIENT_ID"];

You can read more about how configuration works in .NET Core projects in the ASP.NET Core Configuration documentation.

Managing Cookie Lifetime with ASP.NET Core OAuth 2.0 providers

I recently received a support request from a customer regarding the session lifetime once a user has signed in with Auth0, as they wanted users to remain logged in across browser sessions. For our Auth0 integration with ASP.NET Core we have written no special middleware; instead we rely on the standard OpenID Connect or OAuth 2.0 middleware for authenticating users in MVC applications.

My initial response to the user was to simply configure the cookie middleware and specify an ExpireTimeSpan:

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // ...

    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AutomaticAuthenticate = true,
        AutomaticChallenge = true,
        ExpireTimeSpan = TimeSpan.FromDays(7)
    });

    // ...
}

Turns out that does not do much. Even when specifying that option, the cookie still only remains active for the duration of the session:

Cookie duration for lifetime of session only

Understanding what ASP.NET Core is doing

So I had a look at the Cookie Middleware documentation again, and at the bottom of the document there is a section about “Persistent cookies and absolute expiry times”. It turns out that you have to specify the cookie persistence options when making the call to HttpContext.Authentication.SignInAsync(...).
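In code, that pattern looks something like this (a minimal sketch based on that documentation; principal stands in for whatever ClaimsPrincipal you are signing in):

// Sign the user in with a persistent cookie which expires in 7 days
await HttpContext.Authentication.SignInAsync("Cookies", principal,
    new AuthenticationProperties
    {
        IsPersistent = true,
        ExpiresUtc = DateTimeOffset.UtcNow.AddDays(7)
    });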

The only catch is that when using only the OAuth 2.0 or OIDC middleware, you never actually make the call to SignInAsync yourself - the middleware does it for you automatically, as can be seen in the source code for the RemoteAuthenticationHandler class, which is the base class for both the OAuthHandler and OpenIdConnectHandler classes of the OAuth 2.0 and OIDC middleware respectively.

Here is the relevant section of code of that class:

public abstract class RemoteAuthenticationHandler<TOptions> : AuthenticationHandler<TOptions> where TOptions : RemoteAuthenticationOptions
{
    protected virtual async Task<bool> HandleRemoteCallbackAsync()
    {
        // ...

        context.Properties.Items[AuthSchemeKey] = Options.AuthenticationScheme;

        await Options.Events.TicketReceived(context);

        if (context.HandledResponse)
        {
            Logger.SigninHandled();
            return true;
        }
        else if (context.Skipped)
        {
            Logger.SigninSkipped();
            return false;
        }

        await Context.Authentication.SignInAsync(Options.SignInScheme, context.Principal, context.Properties);

        // Default redirect path is the base path
        if (string.IsNullOrEmpty(context.ReturnUri))
        {
            context.ReturnUri = "/";
        }

        Response.Redirect(context.ReturnUri);
        return true;
    }
}

So once the authentication with the remote authentication handler has occurred, the user will be signed in with the SignInScheme of the relevant AuthenticationOptions instance. You usually specify this SignInScheme when you configure the authentication services in the ConfigureServices method of your Startup class, e.g.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(options =>
    {
        options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    });

    // Add framework services.
    services.AddMvc();
}

The value of CookieAuthenticationDefaults.AuthenticationScheme is “Cookies”, which is the same default value which the Cookie Middleware uses when you register it using

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true
});

So that is how the OAuth 2.0 / OIDC middleware signs the user in using the cookie authentication middleware. That means that on every subsequent request, the cookie middleware authenticates the user.

Option 1: Configuring the AuthenticationProperties in the OnTicketReceived event

When you look at the code of RemoteAuthenticationHandler which I linked to above, you will see that a few lines above the call to SignInAsync there is a call to Options.Events.TicketReceived() which will fire the OnTicketReceived event in our middleware.

Passed as a parameter to that event is an instance of TicketReceivedContext, which contains a property of type AuthenticationProperties called Properties. This is ultimately passed in the call to SignInAsync, which is where the ASP.NET Core documentation said we should configure the cookie persistence options.

So now with all that knowledge, all we really have to do is to handle the OnTicketReceived event when registering our OAuth 2.0 (or OIDC) middleware, and set the correct values to make the cookie persistent for 7 days:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    // ...

    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AutomaticAuthenticate = true,
        AutomaticChallenge = true
    });

    app.UseGitHubAuthentication(new GitHubAuthenticationOptions
    {
        ClientId = "...",
        ClientSecret = "...",
        Scope = { "user:email" },
        Events = new OAuthEvents()
        {
            OnTicketReceived = context =>
            {
                context.Properties.IsPersistent = true;
                context.Properties.ExpiresUtc = DateTimeOffset.UtcNow.AddDays(7);

                return Task.FromResult(0);
            }
        }
    });

    // ...
}

And with that in place, when a user signs in to my application you can see that the cookie is now set to expire in 7 days (this blog post was written on 5 December 2016):

Cookie expires in 7 days

Option 2: Call SignInAsync ourselves in the OnTicketReceived

Looking at the source code for RemoteAuthenticationHandler again, another option becomes evident. You will notice that there is an if statement which checks whether the HandledResponse property of the TicketReceivedContext was set when the OnTicketReceived event was handled. This suggests that we can in fact handle the sign-in ourselves and then just indicate that we have done so.

Here is how we would do that:

app.UseGitHubAuthentication(new GitHubAuthenticationOptions
{
    ClientId = "...",
    ClientSecret = "...",
    Scope = { "user:email" },
    Events = new OAuthEvents()
    {
        OnTicketReceived = context =>
        {
            // Sign the user in ourselves
            context.HttpContext.Authentication.SignInAsync(context.Options.SignInScheme, context.Principal,
                new AuthenticationProperties
                {
                    IsPersistent = true,
                    ExpiresUtc = DateTimeOffset.UtcNow.AddDays(7)
                });

            // Indicate that we handled the login
            context.HandleResponse();

            // Default redirect path is the base path
            if (string.IsNullOrEmpty(context.ReturnUri))
            {
                context.ReturnUri = "/";
            }

            context.Response.Redirect(context.ReturnUri);

            return Task.FromResult(0);
        }
    }
});

Notice in the code above that if you want to go this route you will also be responsible for redirecting the user onwards after you have signed them in.

There is a third option, and that is the route which ASP.NET Identity takes. It has two cookies, namely a “main” cookie which authenticates the user, and a second, intermediate cookie in which the user’s information is stored when they sign in using an external login provider such as any of the OAuth 2.0 or OIDC providers.

Here are the basic principles of how this approach works:

  1. Two sets of cookie middleware are registered
    • One is the “main” cookie (let’s call this one the Application Cookie), which is the one that authenticates the user (AutomaticAuthenticate and AutomaticChallenge are set to true)
    • A second, temporary cookie (we’ll call this one the Remote Authentication Cookie), in which the login information received from the OAuth 2.0 provider will be stored
  2. The “default” SignInScheme for authentication will be set to the Remote Authentication Cookie. This means that the OAuth 2.0 middleware will sign in to the Remote Authentication Cookie and NOT the Application Cookie
  3. When the OAuth 2.0 middleware is challenged, we’ll instruct it to redirect to a new RemoteLoginCallback action after the user has authenticated with the OAuth 2.0 provider.
  4. This RemoteLoginCallback action will retrieve the user’s information from the Remote Authentication Cookie, and if that is successful it will manually sign the user in to the Application Cookie by making a call to HttpContext.Authentication.SignInAsync.

So in this version, this is my abbreviated Startup class:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(options =>
        {
            options.SignInScheme = "RemoteAuthCookie";
        });

        // Add framework services.
        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        // ...

        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationScheme = "RemoteAuthCookie",
            CookieName = "RemoteAuthCookie",
            AutomaticAuthenticate = false
        });

        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationScheme = "ApplicationCookie",
            CookieName = "ApplicationCookie",
            AutomaticAuthenticate = true,
            AutomaticChallenge = true
        });

        app.UseGitHubAuthentication(new GitHubAuthenticationOptions
        {
            ClientId = "e31c9807a8ce042118d2",
            ClientSecret = "d8b883e189c7b6a1874e14f6d73c11d606b53d24",
            Scope = { "user:email" }
        });

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

And this is the AccountController class which handles logging the user in by challenging the OAuth 2.0 middleware, logging the user out, and also handling the callback from the OAuth 2.0 middleware which signs the user in to the main Application Cookie.

public class AccountController : Controller
{
    public async Task<IActionResult> Login()
    {
        // Construct the redirect url to go to the RemoteLoginCallback action
        var redirectUrl = Url.Action("RemoteLoginCallback", "Account", new
        {
            ReturnUrl = "/"
        });

        // Ensure we are signed out of the remote cookie auth
        await HttpContext.Authentication.SignOutAsync("RemoteAuthCookie");

        // Challenge the GH provider
        return new ChallengeResult("GitHub", new AuthenticationProperties()
        {
            RedirectUri = redirectUrl
        });
    }

    public IActionResult Logout()
    {
        HttpContext.Authentication.SignOutAsync("GitHub");
        HttpContext.Authentication.SignOutAsync("ApplicationCookie");

        return RedirectToAction("Index", "Home");
    }

    public async Task<IActionResult> RemoteLoginCallback(string returnUrl)
    {
        var auth = new AuthenticateContext("RemoteAuthCookie");

        // Get auth ticket from remote cookie
        await HttpContext.Authentication.AuthenticateAsync(auth);

        if (auth.Accepted)
        {
            // Sign out of remote cookie once we used it
            await HttpContext.Authentication.SignOutAsync("RemoteAuthCookie");

            // Sign the user in
            await HttpContext.Authentication.SignInAsync("ApplicationCookie", auth.Principal,
                new AuthenticationProperties
                {
                    IsPersistent = true,
                    ExpiresUtc = DateTimeOffset.UtcNow.AddDays(7)
                });

            return RedirectToLocal(returnUrl);
        }
        else
        {
            // If we don't have an external auth cookie, redirect to login action
            return RedirectToAction(nameof(Login));
        }
    }

    private IActionResult RedirectToLocal(string returnUrl)
    {
        if (Url.IsLocalUrl(returnUrl))
        {
            return Redirect(returnUrl);
        }
        else
        {
            return RedirectToAction(nameof(HomeController.Index), "Home");
        }
    }
}

Please note that the official ASP.NET Identity code does some extra checks for XSRF, so do not assume my code is “production ready”. If you want to go with this approach, please reference the ASP.NET Identity source code, and especially the code for the GetExternalLoginInfoAsync method, which reads the user’s information from the external cookie.

Source code

You can find full source code demonstrating all 3 different approaches at https://github.com/jerriepelser-blog/oauth-authentication-cookies

2016 in review and plans for 2017

I was looking through my plans for 2016 and realised how much in my life has changed this past year. I see many people complaining that 2016 was such a horrible year, but for me personally it was an amazing one.

This blog post is a chance to reflect on my 2016 and also set some general goals for 2017.

General overview of 2016

2016 was a great year for me. I started the new year still doing freelancing, but was offered a job with Auth0 early in January. I accepted the job offer, and I’ve had a great year working at Auth0.

I managed to visit quite a few countries in this past year: South Africa, USA (twice), Mexico, Vietnam, Taiwan, Thailand and Australia. In the end I made Bangkok my home and I settled there for most of the year.

Traveling took a bit of a back seat as I had some savings goals I wanted to achieve, and I closed off the year reaching and exceeding my savings target.

Towards the end of the year I had a bit of a meltdown. Nothing major - just a general tiredness of being connected and online the whole time.

Being so involved in ASP.NET with my projects (ASP.NET Weekly, OAuth for ASP.NET, the open source projects I did, etc.) just wore me down, and I needed to rid myself of some of that baggage.

I closed down all of those projects, and these past 3 months without any of those commitments have been great.

I also killed my Twitter account. For me Twitter has become a cesspool of hate which reached new lows during the US elections. Time to fill my life with more positive and uplifting things.

Initially I thought that I wanted to rid myself of all the programming related stuff I was doing outside of my day job and find some other hobbies which have nothing to do with programming.

After some soul searching I realised that I actually do enjoy programming a lot, and want to spend a lot of my free time exploring programming related stuff. More on this later in this blog post.

My 2016 goals

Looking back at my plans for 2016, it was a bit of a mixed bag of achievements…

Traveling

My plans for 2016 were to do a lot more travel - and I did. I started off in South Africa, moved over to the USA and then Mexico. At the beginning of April I headed back to Vietnam, and then settled in Thailand for most of the year.

I did a trip back to the USA in August for our Auth0 company retreat. I also managed to fit in a trip to Taiwan, and currently I am in Australia visiting family.

The big plans I had to visit South America did not happen, but as a whole I am happy with the amount of travel I did.

Meeting up with blog readers

This did not happen…

Learning Spanish

I started off learning Spanish, but when I moved back to Asia this took a backseat. I did however switch over to learning Thai. More on this later.

Creating content and training materials

I actually did quite a lot of videos on my YouTube channel as well as a course on ASP.NET Core for Scotch School.

I realised however that making videos is not something I enjoy. I forced myself to do it because I thought my apprehension towards it was just fear and resistance, but the truth is I really do not enjoy it. And it is a lot of work.

During my “digital meltdown” towards the end of the year I also deleted my YouTube channel and rid myself of that baggage.

Bottom line is that making videos is something I will not pursue further. I tried it, but I do not enjoy it.

Generating Income

With the job at Auth0 this changed a lot. I have a steady income and am able to live very comfortably on my salary and still save a lot of money. I have no need to generate income from other sources.

Plans for 2017

So this leads me to my plans for 2017. Two books I have read in the past couple of months which have shaped a lot of my thinking are So Good They Can’t Ignore You and Deep Work, both by Cal Newport.

With the knowledge from these two books, as well as some soul searching, I want to try something else this year. My main thing is that I want to have fun in the programming I do. And I also want to stretch my mind and do deliberate practice.

With this in mind, my plans are to be very deliberate about learning new technologies. I want to achieve this by going through some courses, and also by defining a number of pet projects. Each of these will have very specific goals, and these goals will be shaped to learn a specific set of technologies.

No details yet on what these projects will be, but I will list them on my Now page.

So with that in mind here are my general goals for this new year:

  • Be deliberate about learning new technologies. Achieve this by doing specific training courses, as well as doing pet projects which have the specific goal of learning and applying these new technologies.
  • Do more travel. Bangkok will be my home base for the foreseeable future, but I want to visit more countries in Asia this year. On my list is Japan, China, South Korea and Russia.
  • Learn Thai. I must become much more focused on learning Thai, and I want to achieve general conversational proficiency by the end of the year.
  • Get back in shape. The last two months of 2016 have been good in this regard. I took up Muay Thai again and have been going to class consistently. I also want to mix in some strength work and try to finally reach that target goal of 82 kilograms again.

So those are the things which will shape my actions for this coming year.

Hope you all have a great 2017!

New project: Geotoast

Two books I have read at the end of 2016 which have shaped a lot of my current thinking are So Good They Can’t Ignore You and Deep Work, both by Cal Newport.

I picked up a lot of tips from these books, but the two main things which stood out, and which I want to work on, are the following:

  1. Doing deep work. In other words, I want to set aside time where I can focus intensely on one specific task, without being distracted by anything else.
  2. Deliberate practice (or learning). This is about identifying a set of skills or techniques at which you are lacking, and then focusing your learning specifically towards acquiring those skills. For me as a programmer, this means identifying certain technologies or techniques in which I feel I am lacking, and then focusing on acquiring the skills to use those technologies / techniques.

    One of the best ways to do deliberate practice (or learning) is by using a project-based approach. In other words, once I have identified those technologies, I create a real-life project which makes use of those technologies and build it.

So what do I want to learn?

So, with that in mind, I have identified the first set of skills I want to acquire, as well as the project which will help me accelerate this learning process.

So for me the skills / technologies I want to learn are the following:

  1. Firstly, I am working for Auth0 and building stuff which other developers use, but I have never actually built something using our product. I therefore do not understand some of the frustrations which developers have, and this is a problem. I will be building something using Auth0.
  2. The new API Authorization feature of Auth0
  3. ASP.NET Core, and specifically the following areas:
    • Authentication and Authorization (once again, it is crucial for my job to understand the nuances of these)
    • Building an API, and all that goes with that - authorization, versioning, etc.
  4. JavaScript. Damn, my JS skills are sorely lacking.
  5. One of the newer JS frameworks. Not 100% decided yet, but I am leaning towards Vue.js.
  6. VS Code. I wanna become better at using it. So I will be using it, and not the full Visual Studio. Along with that, all the .NET command line tools. No more fancy IDE for me.

The project

The project I have identified which will help me learn these skills will be called GeoToast. Basically it is about displaying a popup message (or toast) to users on your website based on their geographical location.

The idea for this first came about some years ago while I was still doing the ASP.NET Weekly newsletter. I thought that as I was traveling around the world, it would be cool to meet up with developers in the cities and countries I was visiting. One way I thought of getting in touch with them was by displaying a notification on my blog, so that when someone visited from, say, Taipei, and I knew I was heading to Taipei soon, it could display a message like the following:

Hey, I am headed to Taipei from 1 February to 14 February and would like to meet up with some of my readers. Interested in meeting up? [Click Here]

That, at a high level, is the idea for the application. It will allow people to register on the GeoToast website and then create a list of messages which can be displayed to their website visitors based on the geographical location of that visitor. It will also allow them to embed a small JS file in their website which will make a call to the GeoToast API and, based on whether any of the messages are applicable to that visitor’s location, display a nice notification somewhere on the screen.

Conclusion

That is the very brief overview of the application. I will write more blog posts to share more details and things I have learned as I continue with the development of this project. Because I also want other developers to be able to learn from this, the project is open source and hosted on GitHub at https://github.com/RockstarLabs/GeoToast.

TWiG #1: JWTs, AutoMapper, FluentValidation, CreatedAtActionResult and pain with VS Code

Welcome to This Week in GeoToast (TWiG) #1. To keep myself honest with working on GeoToast, I thought it would be good to write a weekly progress update on the project, covering the good and the frustrating things which I experienced during the week.

The Good

JWT integration is super easy

To secure the API for GeoToast, I am using JSON Web Tokens (abbreviated as JWT and pronounced as “jot”). The JWT middleware in ASP.NET Core makes it super easy to secure APIs using JWTs, and a lot of my work at Auth0 has been focused on developing our Quickstarts and Samples using this technology. I am very familiar with it, and securing the API with the JWT middleware took seconds.
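For a sense of how little is involved, the registration looks something like the following (a minimal sketch with placeholder values; the real GeoToast configuration reads these values from appsettings.json):

var options = new JwtBearerOptions
{
    // Your Auth0 domain, from which the OIDC discovery document is loaded
    Authority = "https://YOUR_AUTH0_DOMAIN/",
    // The identifier of the API being secured
    Audience = "YOUR_API_IDENTIFIER"
};
app.UseJwtBearerAuthentication(options);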

BTW, if you want to use JWTs to secure your own Web API, then please check out the following Auth0 quickstarts:

Also read up more about our upcoming API Authorization.

AutoMapper rocks on .NET Core

Kudos to Jimmy Bogard for his awesome work on AutoMapper. I am using it for mapping between the models used in my API and the Entity Framework models.

Integration is much easier than I remember it being with ASP.NET MVC and Web API before.

Install the AutoMapper NuGet packages:

Install-Package AutoMapper
Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection

The AutoMapper package provides all the AutoMapper goodness, and the AutoMapper.Extensions.Microsoft.DependencyInjection package provides integration with the DI Framework.

Next, register AutoMapper with the DI container:

public void ConfigureServices(IServiceCollection services)
{
    services.AddAutoMapper();
}

This will, among other things, scan your assemblies for mapping profiles to register and also register the IMapper interface with the DI container.

Next up you can go ahead and create your mapping profiles:

public class WebsiteProfile : Profile
{
    public WebsiteProfile()
    {
        CreateMap<Website, WebsiteReadModel>();
        CreateMap<WebsiteCreateModel, Website>();
    }
}

And finally, to use AutoMapper in your controllers you can inject an instance of the IMapper interface in the constructor, and use that to map between classes:

public class WebsiteController
{
    private readonly GeoToastDbContext _dbContext;
    private readonly IMapper _mapper;

    public WebsiteController(GeoToastDbContext dbContext, IMapper mapper)
    {
        _dbContext = dbContext;
        _mapper = mapper;
    }

    [HttpPost]
    public async Task Post([FromBody] WebsiteCreateModel model)
    {
        var website = _mapper.Map<Website>(model);

        _dbContext.Websites.Add(website);
        await _dbContext.SaveChangesAsync();
    }
}

Simple as that!

Fluent Validation available for ASP.NET Core

I was happy to see that Fluent Validation is available for ASP.NET Core. To use it, install the Fluent Validation NuGet package for ASP.NET Core:

Install-Package FluentValidation.AspNetCore -pre

Register Fluent Validation with the DI Container, and tell it to scan the assembly containing the Startup class for any validators:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc()
        .AddFluentValidation(fv => fv.RegisterValidatorsFromAssemblyContaining<Startup>());
}

Add validators:

public class WebsiteCreateModel
{
    public string Name { get; set; }

    public string Url { get; set; }
}

public class WebsiteCreateModelValidator : AbstractValidator<WebsiteCreateModel>
{
    public WebsiteCreateModelValidator()
    {
        RuleFor(x => x.Name).NotEmpty();
        RuleFor(x => x.Url).NotEmpty();
    }
}

And then all you need to do is check ModelState.IsValid in your controller to see whether there are model errors and return an appropriate result:

[HttpPost]
public async Task<IActionResult> Post([FromBody] WebsiteCreateModel model)
{
    if (ModelState.IsValid)
    {
        // Save the model ...
    }

    // Should probably return something better - like the actual errors? :P Will get to improving this
    return BadRequest();
}

Hat tip to this StackOverflow answer.

The CreatedAtActionResult result

I like the new CreatedAtActionResult which can be returned by API endpoints. Let’s say I have an endpoint which returns a single instance of a resource:

[HttpGet("{id}")]publicasyncTask<IActionResult>Get(intid){// Code omitted for brevity...
}

In my POST method which creates a new instance, I can then return a CreatedAtActionResult, and tell it that the resource which was just created can be found at the endpoint above.

So for example:

[HttpPost]
public async Task<IActionResult> Post([FromBody] WebsiteCreateModel model)
{
    if (ModelState.IsValid)
    {
        // Code omitted for brevity...

        return CreatedAtAction("Get", new { id = website.Id }, _mapper.Map<WebsiteReadModel>(website));
    }

    // Code omitted for brevity...
}

In Postman, when I POST to the endpoint defined above to create a new instance, it will return a Location header which indicates where the new resource can be located:
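Something along these lines (a hypothetical example; the actual host and route depend on how the controller is configured):

HTTP/1.1 201 Created
Location: http://localhost:5000/api/websites/1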

The Frustrating

No Resharper

I am very used to my Visual Studio and Resharper. I am finding it currently very frustrating to not be able to use some of the productivity features offered by it, and I believe it makes me a slower coder. I am sticking to VS Code though, and will report in the future on plugins which can replicate some of the VS+Resharper productivity goodness.

Breakpoints not hit in VS Code

I actually knew about this one before, but it tripped me up again. The problem is that with the default project template, the VS Code debugger will not break on breakpoints. The solution is simple - just add "debugType": "portable" to the "buildOptions" section of your project.json file:

..."buildOptions":{"debugType":"portable","emitEntryPoint":true,"preserveCompilationContext":true},...

For more helpful information on debugging .NET Core apps with VS Code, you can also see this blog post.

Debugging in VS Code not loading configuration

When I debugged my application using VS Code, the web API calls I made in Postman all of a sudden started returning 500 (Internal Server Error) responses.

The debug output inside VS Code showed that the error was related to the JWT middleware, which could not load the OIDC configuration from the OIDC discovery document:

System.InvalidOperationException: IDX10803: Unable to obtain configuration from: ‘https:///.well-known/openid-configuration’.

The URL from which it tried to load the configuration is wrong. It should have tried to load it from https://geotoast-dev.auth0.com/.well-known/openid-configuration. The value of geotoast-dev.auth0.com is read from the appsettings.json file, which led me to believe that, for some or other reason, when running the application from the debugger, the appsettings.json file was not read.

A Google search led me to this StackOverflow answer which resolved my problem.

The problem was that my project was not located in the root of the folder which I loaded in VS Code; instead, it was nested a couple of levels down.

So my project is located in the \src\GeoToast folder. Following the advice in that StackOverflow answer, I changed the configuration in my launch.json file, so the cwd attribute points to the folder of my project:
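Something like the following (a sketch; everything other than the program and cwd values is the default VS Code launch configuration, and the output path is an assumption based on the folder structure described above):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (web)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceRoot}/src/GeoToast/bin/Debug/netcoreapp1.1/GeoToast.dll",
            "cwd": "${workspaceRoot}/src/GeoToast"
        }
    ]
}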

VS Code IntelliSense inside event handlers not working properly…?

It seems that sometimes the Intellisense in VS Code will not list the available properties and methods for an object. I am trying to get a proper reproducible sample of the circumstances when this happens. When I do, I will post details.

Stay Tuned

Stay tuned for more status updates on GeoToast. The project is open source and hosted on GitHub at https://github.com/RockstarLabs/GeoToast.


Handling validation responses for ASP.NET Core Web API

I have been working on GeoToast and one of the things I needed to handle was returning a response when model validation fails when calling any of my API endpoints.

I am also using Fluent Validation for my model validation, which I talked about in my previous post, but that has no bearing on this blog post. This blog post deals with ModelState, and FluentValidation ultimately updates the ModelState, so whether you are using normal data annotations attribute validation or Fluent Validation, this will work the same.

The quick way

Thankfully the ASP.NET Filter documentation contains a nice sample of how to do this. Basically, all you need to do is create a custom Action Filter which checks whether the ModelState is valid, and if not, returns a BadRequestObjectResult containing the ModelState.

So create the attribute:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}

Hook it up to your Controller:

[Authorize][Route("api/properties")][ValidateModel]publicclassPropertiesController:Controller{// code omitted for brevity
}

Now every time I make a call to one of the actions in that controller and the model validation fails, it will automatically return a 400 (Bad Request) response, and the body of the response will contain the errors, for example:

{"Url":["'Url' should not be empty."]}

Also read Steve Smith’s MSDN article on Real World ASP.NET MVC Filters.

The custom way

This is, however, not exactly how I want my error responses formatted. I want a standard error response structure which I can use throughout my API. For example, if an exception occurs in my application, I would like to return something like the following:

{"message":"Problems parsing JSON"}

More importantly for the purpose of this blog post, if there are actual validation errors, I would rather return a 422 (Unprocessable Entity), and I want the body to be something like the following:

{"message":"Validation Failed","errors":[{"field":"Url","message":"'Url' should not be empty."}]}

So the structure will contain a message attribute, which for validation errors will simply be “Validation Failed”. Then there is an errors attribute which contains an array of errors. Each error element contains the field to which the error relates, as well as the message for the error.

In the case of model-wide validation errors there will be no field specified, so it can look something like the following:

{"message":"Validation Failed","errors":[{"message":"This is a model-wide error"},{"field":"Url","message":"'Url' should not be empty."}]}

Doing this was easy. First I created a class for the response I want to return:

public class ValidationError
{
    [JsonProperty(NullValueHandling = NullValueHandling.Ignore)]
    public string Field { get; }

    public string Message { get; }

    public ValidationError(string field, string message)
    {
        Field = field != string.Empty ? field : null;
        Message = message;
    }
}

public class ValidationResultModel
{
    public string Message { get; }

    public List<ValidationError> Errors { get; }

    public ValidationResultModel(ModelStateDictionary modelState)
    {
        Message = "Validation Failed";
        Errors = modelState.Keys
            .SelectMany(key => modelState[key].Errors.Select(x => new ValidationError(key, x.ErrorMessage)))
            .ToList();
    }
}

Notice the [JsonProperty(NullValueHandling = NullValueHandling.Ignore)] attribute on the Field property. This is to ensure that the field will not be serialized in the case of a null value - i.e. for model-wide validation errors.

The ModelStateDictionary from which I obtain the errors does not allow null values for the Key, but it does allow empty strings. I want to convert this to a null in the case of an empty string, so the Field property does not get serialized for empty field names. That is the check I do in the constructor for ValidationError.

Also note that in the constructor of ValidationResultModel I take the errors and flatten them. In the ModelStateDictionary, the Keys property will contain a key value, which is typically the property name, and the Errors property for that key will contain all the errors related to that field.

I want to flatten this structure to simple key-value pairs. So if a Key contains 2 Errors, it will be flattened to 2 ValidationError entries - one for each error.

The last thing that remains is to create my own custom IActionResult which I will return. I do not want to return BadRequestObjectResult because that returns an HTTP Status Code 400, and I want to return a 422 instead.

public class ValidationFailedResult : ObjectResult
{
    public ValidationFailedResult(ModelStateDictionary modelState)
        : base(new ValidationResultModel(modelState))
    {
        StatusCode = StatusCodes.Status422UnprocessableEntity;
    }
}

And the final step is to update my ValidateModelAttribute to instead return the new ValidationFailedResult:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new ValidationFailedResult(context.ModelState);
        }
    }
}

And that’s all there is to it.

If you want to follow along as I develop GeoToast, be sure to subscribe to my RSS feed to be notified of future posts.

TWiG #2: Frustrations with .NET Core tooling and VS Code

Things I learned

Rename Refactor in VS Code

In last week’s update I alluded to the fact that I miss many of the refactorings which are offered by Visual Studio and Resharper. One good find this week is that VS Code at least has Rename Symbol functionality. As explained by the VS Code documentation:

Some languages support rename symbol across files. Simply press F2 and then type the new desired name and press Enter. All usages of the symbol will be renamed, across files.

I would appreciate proper refactorings, but for now this one at least proved very helpful during the past week as I made some design changes.

global.json

I added a unit test project to my application, and obviously needed to reference the main project from the unit test project, so I updated the project.json for the unit test project accordingly:

{"version":"1.0.0-*","buildOptions":{"debugType":"portable"},"dependencies":{"GeoToast":{"target":"project"},"System.Runtime.Serialization.Primitives":"4.3.0-preview1-24530-04","xunit":"2.1.0","dotnet-test-xunit":"1.0.0-rc2-192208-24"},...}

I was wondering how on earth the dotnet restore process would know where to find the GeoToast project, since it is obviously in a completely different folder. For reference, this is my folder structure:

/GeoToast
|__/src
   |__/GeoToast
      |__Source Files
      |__project.json
|__/test
   |__/GeoToast.Tests
      |__Test Files
      |__project.json

And right on cue, I got an error message stating Unable to resolve ‘GeoToast’ for ‘.NETCoreApp,Version=v1.1’:

Turns out I should have read the docs on Creating a Unit Test more thoroughly, since they state very clearly that you need to add a global.json file to the root folder of your application that contains the names of the src and test folders:

{"projects":["src","test"]}

That sorted the problem out and dotnet restore worked like a charm.

For more information check out the global.json reference.

MSBuild: With the switch to MSBuild this will change, so if you check this blog post a few months from now, this may not be applicable anymore.

Customising validation responses

I wanted to customise the responses I sent from the API when validation fails. I wrote about how to do this in Handling validation responses for ASP.NET Core Web API.

Things I found frustrating

Hiccups after installing Visual Studio 2017 RC

I installed Visual Studio 2017 RC for some other work I was doing, which also installed the .NET Core Tools RC3 with support for the new MSBuild-based tooling.

Initially when I tried to do a dotnet restore on my GeoToast project (which was still project.json based) I got the following error:

MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.

Sorting out this problem is easy. Just add the following to your global.json:

{"sdk":{"version":"1.0.0-preview2-003121"}}

That constrains dotnet to using the project.json-based tooling. Check out the “Side by side install” section in the MSBuild tools announcement.

See this commit for the changes.

Changing to MSBuild

After the change described above, I actually converted my projects over to the new csproj files. I used the approach described in the “Upgrading project.json projects” section of the MSBuild tools announcement.

It involved simply removing the sdk section I previously added to my global.json file and then running the dotnet migrate command as described in that document. I ran it from the root folder of my project and it picked up both my projects and converted them. Besides removing the project.json files, the migration process also removed the global.json file.
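For reference, the whole migration boiled down to a single command, run from the root folder of the repository:

dotnet migrate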

The whole process was seamless and both my main and test projects still worked fine after running dotnet restore for both of them. The dotnet watch command also seems to still be working.

See this commit for the changes.

Reaching my limit with VS Code?

I think I have reached my limit with VS Code. I set it as a clear goal for this project to learn VS Code, but I am feeling too unproductive. I can code much faster using Visual Studio + Resharper. While it is important to learn new things, I am also conscious of the fact that I do not have much free time to work on this project. The little bit of time I have, I need to be able to use as productively as possible.

With that in mind I am seriously considering forsaking VS Code in favour of Visual Studio 2017 + Resharper. Will let you know next week what I decided ;)

Plans for the coming week

Speaking of next week… I need to go to Japan for a month for work purposes, so I am probably going to have even less time. This coming week especially I need to sort out a few final things before I leave Bangkok, then travel to Tokyo, and after that settle in in Japan.

So I am not sure how much time I will have this coming week for working on GeoToast.

The Japanese Business Card Exchange Ritual

I am in Tokyo, Japan for the next month, assisting our Country Manager (Kiheita) as a technical resource on sales calls. Before I came to Japan, Kiheita asked me to get business cards printed.

Seeing as I work remotely and never interact with customers in person, this is not something I have ever needed while working for Auth0. As a matter of fact, I think the last time I had business cards was back in the 1990s in my first job as a computer sales person.

It is just not something a programmer ever really needs. And in South Africa, whether you have a business card or not was never an issue - if someone ever needed my details I would just write them on a piece of paper, or take down their email address and send the required information to them.

Business Cards in Japan

So when Kiheita asked me to get business cards I first thought “what’s the big deal whether I have them or not?”, but I had some printed in any case. This morning when we left for the first meetings I grabbed a bunch of cards and put them in my jacket pocket - and just as well I grabbed a lot, because oh boy, did I need them.

I quickly learned that exchanging business cards is a very important ritual in Japan. It is not just a simple handing over of business cards. Oh no, things happen in a very particular order.

Everything needs to be done in a specific manner - how you present the card; how you receive the other person’s cards; how you bow when you take a card; what you say when you hand over your card; how you respond when you receive the other person’s card; etc…

Also, I just grab a card from my jacket pocket to hand over. Not the Japanese. Oh no. They have dedicated card holders, made of fine leather or what have you.

Also, once you receive the cards and sit down again, you don’t simply pocket them. No, you lay them out on the table in front of you. Apparently so you can learn the names and show respect.

Conclusion

The little things about different cultures are why I love to travel. It really puts a smile on my face every time I notice or experience things like these. Also, this was the first time I visited a country with such a distinct culture in a business context, so I really appreciated seeing this.

Oh, and another thing: when you are finished with a meeting, the people don’t just say goodbye in the meeting room. No, they walk you all the way to the elevator, and see you off there. All of them - everyone who was in the meeting. Such a cool thing :)

If you want to get more details on the nitty gritty of the Japanese business card exchange ritual, please check out this article. I’ll be studying it over the weekend to make sure I get the finer details correct ;)

TWiG #3: A week of travel

This week I have made little progress on GeoToast as I have traveled to Tokyo (where I will stay for the next month), and have also done a bit of sightseeing.

I am settled in, so this coming week should see progress again.

My plan for the coming week is to serve the HTML/CSS/JS of a toast from my API, and then inject it into the DOM of a web page.

Vue.js Learning Resources

For the client side development of GeoToast, I have decided to go with Vue.js rather than Angular or React. I have no specific rationale for it other than that, from everything I have seen about Vue so far, it just seems to make more sense to me than either Angular or React.

Vue does not get nearly the amount of attention that Angular or React does, but it still has a flourishing community and plentiful resources are available for learning Vue. It has also had amazing growth in 2016.

In this blog post I will take a look at some of the resources I have found so far to aid me in learning Vue.

I will be updating this blog post as I come across new resources of note.

The Official Website

The official Vue.js website is outstanding, and if that was the only resource available to you, you would probably be fine.

The Guide

The Vue.js Guide will walk you through all the concepts related to Vue. It starts with installing Vue and then goes on to a basic introduction and covering all of the core concepts such as the Vue Instance, the Template Syntax, Components, etc.

It also covers a wide range of advanced concepts such as transitions, mixins, plugins, etc.

API Documentation

The Vue API documentation looks to be very complete and contains information on all the Vue API, directives and built-in components.

By itself probably not a good resource to get started, but once you get going it is a good place to get more information on how to use various aspects of the API.

Examples

The Examples section of the website contains various embedded JSFiddles that demonstrates various aspects of Vue. It seems like a good place to learn about some of the aspects in isolation.

Videos and Courses

There is a wide range of free videos available on YouTube. Two that stand out are:

If you’re an Egghead fan (like me), they have a short video course available called Develop Web Apps with Vue.js.

You can also look at the Vue section of their website for more videos. I have also noticed that there are 2 upcoming courses called Build a Vue.js app with Vuex and Build a Server Rendered Vue.js App with Nuxt and Vuex.

The last one I want to mention - and the route I went - is a Udemy course called Vue JS 2 - The Complete Guide (incl. Vuex). It is a paid course (I got it at a discount for $15).

I decided on going with this as I liked the style of the instructor (from what I have seen on the preview videos). I also liked the project based approach he is taking to the course.

Other resources

There are many other resources available for learning Vue. One I would highly recommend you bookmark (or star) is Awesome Vue.

This contains a rich set of resources related to Vue: from official documentation, podcasts, examples and tutorials to get you started, to lists of libraries, editors and other resources to smooth your development.

Conclusion

This is by no means an exhaustive list of Vue resources (for that see Awesome Vue), but it is good enough to get you going.

Compress Images using the TinyPNG CLI

I am busy doing a few SEO related optimizations on my blog and one of the actions I am taking is to compress (or shrink) all the images for my blog. I came across the TinyPNG CLI tool which allows you to easily compress all images from the Command Line.

This is a quick introduction to the tool.

Background

I am systematically going through my blog trying to identify SEO related issues and fix them. One of the areas I identified is that I never bothered to optimize the images (typically screenshots) which I use in my blog posts.

The total size of all the images on my blog was about 24MB.

The first tool I tried was a Windows utility called PNGGauntlet. It is free, so I downloaded it and set it to work. The total time it took to run through all the images was around 1 hour, and it brought the overall size down by about 8MB - so down to around 16MB for all the images.

At the same time I tried another utility on my Mac called ImageOptim. This one fared a little bit better, and after about an hour of work it reduced the overall size of all the images by about 9MB - so down to around 15MB.

TinyPNG

At Auth0 we always use TinyPNG to compress images we create for the documentation and tutorials. I tried it quickly on a few images and got much better compression ratios than either PNGGauntlet or ImageOptim.

The only problem was that the web interface allows you to do a maximum of 20 images at a time. It is also cumbersome because it means I have to upload each image individually, then download it after it has been compressed and finally copy it over the old image.

I noticed however that they have a Developer API available which gives you 500 free images a month.

Only I was not really in the mood for writing my own app that works with the developer API.

tinypng-cli

So I did a quick search for a TinyPNG CLI and came across tinypng-cli. It is a Node.js package, so make sure you have Node.js installed and then install the tinypng-cli package globally:

npm install -g tinypng-cli

Next, sign up for a Developer API Key which they will email to you in a few seconds.

Once you have the API key you can run the command line utility. Tell it to look in the current folder (.) and all sub-folders (-r), and pass along the API key you received using the -k option, e.g.

tinypng . -r -k YOUR_API_KEY

Compressing all 450 images took under a minute (as opposed to an hour with the desktop applications!), and the total size of the images came down from 24MB to 8.5MB. That is a saving of almost 16MB - more than double what either of the desktop applications achieved.

With the command line utility installed, it will be much easier in future to quickly compress all the images I create for the blog.

TWiG #4: A week of learning and messing around

This week I yet again made little progress on GeoToast. I started watching a Vue.js course on Udemy called Vue JS 2 - The Complete Guide (incl. Vuex). Made some progress on that, but I also spent a lot of my free time goofing off. Watching Netflix, surfing the web, etc…

So this coming week I need to focus on making major progress on the front-end of GeoToast. I will put aside watching the Vue.js course for now, and instead do “on-demand training” by looking up and learning about things as I run into issues.

Hopefully by next week I will have some good progress to show.


Getting started with .NET Core and AWS Lambda

This blog post will provide you with a brief introduction to using C# and .NET Core with AWS Lambda and also look at the different programming models available when using .NET Core with Lambda.

The reason I started looking into this was that I wanted a dead simple hosting solution for GeoToast.

And yeah, it has been a while since I have written about GeoToast. I spent a month in Japan, and sightseeing was higher on my list of priorities than coding…

Serverless and AWS Lambda

AWS Lambda is the serverless product offered by Amazon Web Services. Serverless does not mean that there is no server involved - obviously there is - but just that managing servers, scaling, etc is not something that you need to worry about.

What you need to worry about (as a programmer) is writing the code that executes the logic. You can then deploy it to a serverless environment such as AWS Lambda and it will take care of scaling, etc. Some of the other offerings available are Azure Functions, Google Cloud Functions and also Webtask.

The other advantage is that serverless environments only charge you for the time your code takes to execute. So if you have a small POC you want to develop (like I do with GeoToast), this is ideal as you do not have to worry about paying for a server which sits idle 99% of the time.

Programming models for C# / .NET Core

I have only started playing around with AWS Lambda recently, but from what I can see so far there are basically 3 models which you can use when developing AWS Lambda functions using .NET Core and C#.

  • Plain Lambda Function. In this case you create a Lambda function which is a simple C# class with a function. This function can be triggered by certain events, or triggered manually from an application.
  • Serverless Application. In this case you can deploy an AWS Lambda function (or collection of functions) using AWS CloudFormation and front it with AWS API Gateway.

    You can also read more about this model in Deploying Lambda-based Applications.

  • ASP.NET Core app as Serverless Application. This is a variant of the model above, but in this instance your entire existing ASP.NET Core application is published as a single Lambda function. This then utilizes the API Gateway Proxy Integration to forward all requests coming through API Gateway to this Lambda function.

    The Lambda function then hands the request off to the ASP.NET Core pipeline. So you have all the middleware you love at your disposal.

    This means that you basically have the normal ASP.NET Core request pipeline, but instead of IIS/Nginx and Kestrel, you have API Gateway and Lambda. So instead of this:

    You have this:

In the coming blog posts I will dive deeper into each of these.
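To make the first of these models a bit more concrete, here is a minimal sketch of a plain Lambda function. This is my own illustrative example - the class and method names are placeholders - and it assumes the Amazon.Lambda.Core and Amazon.Lambda.Serialization.Json NuGet packages:

using Amazon.Lambda.Core;

// Tell the Lambda runtime how to deserialize the incoming event payload.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace PlainLambdaSample
{
    public class Function
    {
        // The handler receives a typed input along with an ILambdaContext,
        // which exposes details of the invocation (function name, remaining time, a logger, etc).
        public string FunctionHandler(string input, ILambdaContext context)
        {
            context.Logger.LogLine($"Received input: {input}");
            return input?.ToUpper();
        }
    }
}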

So why AWS Lambda and not any of the others?

So I guess another question is why I am using AWS Lambda and not the offerings from Microsoft, Google or Auth0’s own WebTask? Well, for a couple of reasons which are true at the time of writing this blog post:

  1. It supports .NET Core - the others don’t. Google Cloud Functions isn’t even generally available yet.
  2. The 3rd programming model I mentioned above (ASP.NET Core app as Serverless Application) is one which interests me the most. I can take an entire ASP.NET Core app and publish it to Lambda. This means I can test my app locally with my normal development flow, and then simply push it to AWS to host it as a Lambda function.

Some resources to get you started

Also stay tuned, as I plan to do more posts on C#, .NET Core and Lambda in the future.

Compress images with C#, .NET Core, AWS Lambda and TinyPNG

In this blog post we will look at how you can create a simple AWS Lambda function in C# (and .NET Core) which will compress images uploaded to an S3 bucket using the TinyPNG API. The Lambda function will be configured to automatically be triggered whenever a new image is uploaded to the S3 bucket.

I am using Visual Studio 2017, so ensure you have downloaded and installed the Preview of the AWS Toolkit for Visual Studio 2017.

Sign up for TinyPNG

I will be using TinyPNG to compress the images from the Lambda function, so if you want to follow along, then please head over to the TinyPNG Developer website and sign up for an API Key.

Save the API Key you received after signing up, as you will require it later in this blog post to pass along when calling the TinyPNG API.

Creating the Lambda function

To get started create a new Lambda project in Visual Studio:

For the Lambda Blueprint you can select an empty function:

Next up install the NuGet packages we will require:

Install-Package Amazon.Lambda.S3Events
Install-Package TinyPNG

The full code for the function is as follows:

public class Function
{
    private readonly string[] _supportedImageTypes = new string[] { ".png", ".jpg", ".jpeg" };
    private readonly AmazonS3Client _s3Client;

    public Function()
    {
        _s3Client = new AmazonS3Client();
    }

    public async Task FunctionHandler(S3Event s3Event, ILambdaContext context)
    {
        foreach (var record in s3Event.Records)
        {
            if (!_supportedImageTypes.Contains(Path.GetExtension(record.S3.Object.Key).ToLower()))
            {
                Console.WriteLine($"Object {record.S3.Bucket.Name}:{record.S3.Object.Key} is not a supported image type");
                continue;
            }

            Console.WriteLine($"Determining whether image {record.S3.Bucket.Name}:{record.S3.Object.Key} has been compressed");

            // Get the existing tag set
            var taggingResponse = await _s3Client.GetObjectTaggingAsync(new GetObjectTaggingRequest
            {
                BucketName = record.S3.Bucket.Name,
                Key = record.S3.Object.Key
            });

            if (taggingResponse.Tagging.Any(tag => tag.Key == "Compressed" && tag.Value == "true"))
            {
                Console.WriteLine($"Image {record.S3.Bucket.Name}:{record.S3.Object.Key} has already been compressed");
                continue;
            }

            // Get the existing image
            using (var objectResponse = await _s3Client.GetObjectAsync(record.S3.Bucket.Name, record.S3.Object.Key))
            using (Stream responseStream = objectResponse.ResponseStream)
            {
                Console.WriteLine($"Compressing image {record.S3.Bucket.Name}:{record.S3.Object.Key}");

                // Use TinyPNG to compress the image
                TinyPngClient tinyPngClient = new TinyPngClient(Environment.GetEnvironmentVariable("TinyPNG_API_Key"));
                var compressResponse = await tinyPngClient.Compress(responseStream);
                var downloadResponse = await tinyPngClient.Download(compressResponse);

                // Upload the compressed image back to S3
                using (var compressedStream = await downloadResponse.GetImageStreamData())
                {
                    Console.WriteLine($"Uploading compressed image {record.S3.Bucket.Name}:{record.S3.Object.Key}");
                    await _s3Client.PutObjectAsync(new PutObjectRequest
                    {
                        BucketName = record.S3.Bucket.Name,
                        Key = record.S3.Object.Key,
                        InputStream = compressedStream,
                        TagSet = new List<Tag>
                        {
                            new Tag { Key = "Compressed", Value = "true" }
                        }
                    });
                }
            }
        }
    }
}

Let’s walk through the logic for the code above:

  1. The function is declared to take an S3Event as input parameter (along with an instance of ILambdaContext). The S3Event will contain information about the S3 event which triggered the function, such as uploading of a new file to a bucket. The function will process each of the records in the S3 event notification.
  2. If the S3 Object’s file extension is not in the list of valid image extensions we are interested in, then the object will be skipped.
  3. If the S3 Object is a valid image, the tags of the object will be checked for the presence of a tag named Compressed with a value of true. This will be our indicator that we have processed and compressed a particular image already. If it has been compressed, it will be skipped.
  4. At this point we have a valid, uncompressed image. So we read it from S3 into a stream.
  5. A new instance of TinyPngClient is created, and we read the value for the TinyPNG API key from an environment variable named “TinyPNG_API_Key”. (This will be configured later when deploying the function.)
  6. The stream is passed to the instance of TinyPngClient to upload and compress the image using the TinyPNG API.
  7. Finally the compressed image is downloaded from TinyPNG, and then uploaded to the same S3 bucket with the same name as before (i.e. replacing the existing image). This time however we will add a tag named Compressed with a value of true to the image to ensure that it does not get processed a second time around.

Deploying the Lambda

To deploy the Lambda function to AWS, right-click on your project in the Solution Explorer window in VS 2017, and select the Publish to AWS Lambda… option. This will open the Upload Lambda Function dialog box. You can complete the information by supplying a Function Name:

Click on the Next button and complete the Advanced Lambda Settings. In the Environment section add a new Variable named TinyPNG_API_Key with the value of the API key you received from TinyPNG:

Click on the Upload button.

Testing the function

Once the Lambda function has been uploaded, you can test it. I have an existing S3 bucket with some images which I uploaded before:

If I browse the bucket in Visual Studio I can right click on the images and select the Invoke Lambda Function… option:

This will open up the Invoke Lambda Function dialog window:

Ensure that you have selected the new Lambda function and click on OK. Give it a while, and when you refresh your bucket, you should notice that the images are now considerably smaller:

Automatically invoking the function

You can also configure the function to automatically invoke every time a new image is uploaded to the S3 Bucket. To do this you can open the Lambda function’s settings inside Visual Studio and go to the Event Sources tab:

Click the Add button and in the Add Event Source dialog, select Amazon S3 as the event source, and select the S3 bucket you want to monitor. Click OK to add the event source.

Now, every time you upload a new image to that particular bucket the Lambda function will automatically be triggered and the image will be compressed.

Conclusion

In this blog post we developed a simple AWS Lambda function using C# and .NET Core. The function monitors an AWS S3 Bucket, and every time a new image is added to the bucket the image is automatically compressed using the TinyPNG API.

Source code for this blog post can be found at https://github.com/jerriepelser-blog/LambdaImageCompressor

Manually validating a JWT using .NET

JSON Web Tokens are commonly used to authorize requests made to an API. For this purpose ASP.NET (both OWIN and Core) has middleware which allows you to easily authorize any request by ensuring the token being passed to the API is valid.

But what if you want to manually validate a token?

At Auth0 we allow signing of tokens using either a symmetric algorithm (HS256) or an asymmetric algorithm (RS256). HS256 tokens are signed and verified using a simple secret, whereas RS256 tokens use a private key for signing and a public key for verifying the token signature.

See this blog post by my colleague Shawn Meyer on Navigating RS256 and JWKS for more information.

Well, back to the question of validating a token - in this case specifically a token signed using the RS256 algorithm.

The source code for the ASP.NET Core JWT middleware is available on GitHub and browsing through that gives some clues as to how you can achieve this in a non-ASP.NET application.

First up ensure that you have the following NuGet packages installed:

Install-Package System.IdentityModel.Tokens.Jwt
Install-Package Microsoft.IdentityModel.Protocols.OpenIdConnect

Since this is mostly geared towards using this technique with Auth0, I am declaring some variables containing my Auth0 domain and audience (which is typically your Auth0 API Identifier). Depending on your scenario you may need to adapt this to suit your needs.

const string auth0Domain = "https://jerrie.auth0.com/"; // Your Auth0 domain
const string auth0Audience = "https://rs256.test.api"; // Your API Identifier

The first thing is to download the OIDC Configuration from the OpenID Connect Discovery endpoint. This will contain (among other things) the JSON Web Key Set containing the public key(s) that can be used to verify the token signature.

IConfigurationManager<OpenIdConnectConfiguration> configurationManager =
    new ConfigurationManager<OpenIdConnectConfiguration>(
        $"{auth0Domain}.well-known/openid-configuration",
        new OpenIdConnectConfigurationRetriever());

OpenIdConnectConfiguration openIdConfig =
    await configurationManager.GetConfigurationAsync(CancellationToken.None);

Next up we need to configure the token validation parameters. I specify the issuer and audience(s) and also tell it to use the signing keys - i.e. the public key(s) - which were downloaded above.

TokenValidationParameters validationParameters = new TokenValidationParameters
{
    ValidIssuer = auth0Domain,
    ValidAudiences = new[] { auth0Audience },
    IssuerSigningKeys = openIdConfig.SigningKeys
};

With that in place, all you need to do is validate the token:

SecurityToken validatedToken;
JwtSecurityTokenHandler handler = new JwtSecurityTokenHandler();

var user = handler.ValidateToken("eyJhbGciOi.....", validationParameters, out validatedToken);

ValidateToken will return a ClaimsPrincipal which will contain all the claims from the JSON Web Token.

So for example, to get the user’s ID, we can query the NameIdentifier claim:

Console.WriteLine($"Token is validated. User Id {user.Claims.FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier)?.Value}");

You can find the sample application I did for our Auth0 samples at https://github.com/auth0-samples/auth0-dotnet-validate-jwt/tree/master/IdentityModel-RS256

Preventing a UWP ListView item from being reordered

The UWP ListView allows you to easily reorder items inside a ListView by setting the CanReorderItems and AllowDrop properties to True.

Let’s for example take a very simple Page with a list view containing 6 buttons. Note that the CanReorderItems and AllowDrop properties are set to True:

<Page
    x:Class="WorkflowDesigner.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:WorkflowDesigner"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <ListView x:Name="ListView1" AllowDrop="True" CanReorderItems="True">
            <Button>Button 1</Button>
            <Button>Button 2</Button>
            <Button>Button 3</Button>
            <Button>Button 4</Button>
            <Button>Button 5</Button>
            <Button>Button 6</Button>
        </ListView>
    </Grid>
</Page>

When you run the application, you are able to drag any of the items in the list view to a new position inside the list view:

But what if you do not want to allow the user to drag and reorder specific items? In the example above, let’s say that for some reason you do not want the user to drag and reorder the 2nd button. The user can drag and reorder all the other buttons around it, but that specific button we do not want them to drag.

Alter the list view declaration to set the CanDragItems property to True, and also add an event handler for the DragItemsStarting event.

<ListView x:Name="ListView1"
          DragItemsStarting="ListView1_OnDragItemsStarting"
          AllowDrop="True"
          CanReorderItems="True"
          CanDragItems="True">
    <Button>Button 1</Button>
    <Button>Button 2</Button>
    <Button>Button 3</Button>
    <Button>Button 4</Button>
    <Button>Button 5</Button>
    <Button>Button 6</Button>
</ListView>

For the event handler itself, you can check any condition and then simply set the Cancel property of the event arguments to true if you want to cancel the dragging action. In the example below, I check whether any of the items being dragged is a Button with the Content “Button 2”. If it is, I cancel the drag event, otherwise I allow it.

private void ListView1_OnDragItemsStarting(object sender, DragItemsStartingEventArgs e)
{
    e.Cancel = e.Items.Any(o =>
    {
        if (o is Button b && b.Content.ToString() == "Button 2")
            return true;

        return false;
    });
}

Now when running the application you can see that I can drag and drop the first button, but when I attempt to drag “Button 2”, the drag operation will simply not be initiated:

BTW, if you are not familiar with the syntax o is Button b which I used above, then please check out the Pattern Matching section of the What’s New in C# 7.0 blog post, or the Roslyn feature document on Pattern Matching for C#.
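For a quick standalone illustration of the type pattern (my own sketch, not code from the sample above):

object o = "hello";

// Before C# 7: test the type, then cast separately.
if (o is string)
{
    string s1 = (string)o;
    Console.WriteLine(s1.Length);
}

// With the C# 7 "is" type pattern the test and the cast happen in one step;
// s2 is only in scope (and definitely assigned) when the test succeeds.
if (o is string s2)
{
    Console.WriteLine(s2.Length);
}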

Creating a Serverless Application with .NET Core, AWS Lambda and AWS API Gateway

Previously I gave an overview of the programming models when using .NET Core with AWS Lambda, and I also showed how to create an image compressor in Lambda and C#.

This time around we’ll put together a simple Web API with a couple of endpoints which can be called from any client application. The API I’ll create will utilize the NodaTime library created by Jon Skeet to return a list of time zones based on the Time Zone Database.

There will be 2 endpoints:

  1. The first endpoint will be at /zones and will return a list of all time zones.
  2. The second endpoint will return only a single time zone, and the endpoint will accept a request in the format /zones/{id} where id is the ID of the time zone to return.

Create the project

As before, ensure you have downloaded and installed the Preview of the AWS Toolkit for Visual Studio 2017.

Inside Visual Studio, create a new project and select the AWS Serverless Application (.NET Core) template:

Next, select the Empty Serverless Application blueprint:

Ensure that you have NodaTime installed using NuGet:

Install-Package NodaTime

Return the list of time zones

First up, open the serverless.template file, and alter the default file that was created by the template. Change the name of the resource to GetAll, the Handler to point to a function called GetAllTimeZones and set the Path to /zones:

{"AWSTemplateFormatVersion":"2010-09-09","Transform":"AWS::Serverless-2016-10-31","Description":"An AWS Serverless Application.","Resources":{"GetAll":{"Type":"AWS::Serverless::Function","Properties":{"Handler":"TimeZoneService::TimeZoneService.Functions::GetAllTimeZones","Runtime":"dotnetcore1.0","CodeUri":"","MemorySize":256,"Timeout":30,"Role":null,"Policies":["AWSLambdaBasicExecutionRole"],"Events":{"PutResource":{"Type":"Api","Properties":{"Path":"/zones","Method":"GET"}}}}}},"Outputs":{}}

This will create a Lambda function called GetAll and will register the /zones endpoint in AWS API Gateway to call this function. Inside our application, the function called GetAllTimeZones will handle this request.

Please note that in the serverless.template file the full value for Handler is set to TimeZoneService::TimeZoneService.Functions::GetAllTimeZones. This will call a function in the format assembly::namespace.class::method. So in other words the handler for the function is located in the assembly TimeZoneService, namespace TimeZoneService and class Functions. The method inside that class which will handle the request is the function GetAllTimeZones.

So let’s create this function. Head over to the Functions.cs class which was created by the template. You can delete the default function which was created by the template and replace the class with the following code:

namespace TimeZoneService
{
    public class Functions
    {
        public APIGatewayProxyResponse GetAllTimeZones(APIGatewayProxyRequest request, ILambdaContext context)
        {
            List<TimeZoneInfo> timeZones = new List<TimeZoneInfo>();
            foreach (var location in TzdbDateTimeZoneSource.Default.ZoneLocations)
            {
                timeZones.Add(GetZoneInfo(location));
            }

            var response = new APIGatewayProxyResponse
            {
                StatusCode = (int)HttpStatusCode.OK,
                Body = JsonConvert.SerializeObject(timeZones),
                Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
            };

            return response;
        }

        private TimeZoneInfo GetZoneInfo(TzdbZoneLocation location)
        {
            var zone = DateTimeZoneProviders.Tzdb[location.ZoneId];

            // Get the start and end of the year in this zone
            var startOfYear = zone.AtStartOfDay(new LocalDate(2017, 1, 1));
            var endOfYear = zone.AtStrictly(new LocalDate(2018, 1, 1).AtMidnight().PlusNanoseconds(-1));

            // Get all intervals for current year
            var intervals = zone.GetZoneIntervals(startOfYear.ToInstant(), endOfYear.ToInstant()).ToList();

            // Try grab interval with DST. If none present, grab first one we can find
            var interval = intervals.FirstOrDefault(i => i.Savings.Seconds > 0) ?? intervals.FirstOrDefault();

            return new TimeZoneInfo
            {
                TimeZoneId = location.ZoneId,
                Offset = interval.StandardOffset.ToTimeSpan(),
                DstOffset = interval.WallOffset.ToTimeSpan(),
                CountryCode = location.CountryCode,
                CountryName = location.CountryName
            };
        }
    }
}

The GetAllTimeZones function will simply iterate through all the locations in the TZDB database. For each location it will do some calculation to determine the details of the time zone for that location, such as the offset from UTC, as well as the offset during Daylight Savings, if applicable.
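The TimeZoneInfo class being returned is not shown above, but its shape can be inferred from the way it is used in GetZoneInfo. Here is a minimal sketch (the property types are assumptions; note that this is our own DTO in the TimeZoneService namespace, not System.TimeZoneInfo):

using System;

namespace TimeZoneService
{
    // Simple DTO describing a time zone; inferred from the usage above.
    public class TimeZoneInfo
    {
        public string TimeZoneId { get; set; }
        public TimeSpan Offset { get; set; }
        public TimeSpan DstOffset { get; set; }
        public string CountryCode { get; set; }
        public string CountryName { get; set; }
    }
}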

Next up we can deploy the project. Right click on the project in the Solution Explorer, and select the Publish to AWS Lambda… option. The Publish AWS Serverless Application dialog will be displayed. Complete the required information, and deploy the project.

Once the project has been deployed, the URL for your serverless app will be displayed in the AWS Serverless URL field:

Go to that URL and append /zones to access the endpoint to retrieve all the time zones:

Return a single time zone

Next up, let’s create a method that will return a single time zone. For this we can create a new method called GetSingleTimeZone:

public APIGatewayProxyResponse GetSingleTimeZone(APIGatewayProxyRequest request, ILambdaContext context)
{
    string timeZoneId = null;
    if (request.PathParameters != null && request.PathParameters.ContainsKey("Id"))
        timeZoneId = request.PathParameters["Id"];

    if (!String.IsNullOrEmpty(timeZoneId))
    {
        // Url decode the TZID
        timeZoneId = WebUtility.UrlDecode(timeZoneId);

        var location = TzdbDateTimeZoneSource.Default.ZoneLocations
            .FirstOrDefault(l => String.Compare(l.ZoneId, timeZoneId, StringComparison.OrdinalIgnoreCase) == 0);

        if (location != null)
        {
            return new APIGatewayProxyResponse
            {
                StatusCode = (int)HttpStatusCode.OK,
                Body = JsonConvert.SerializeObject(GetZoneInfo(location)),
                Headers = new Dictionary<string, string> { { "Content-Type", "application/json" } }
            };
        }
    }

    return new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.NotFound
    };
}

This method will check to see if there is a path parameter called Id which will contain the value for the time zone ID you want to retrieve. If the parameter is present it will be retrieved and URL-decoded. Then we’ll retrieve the time zone with that ID, and as before we’ll retrieve the information for the time zone.

Finally, if the Id parameter was not passed, or a time zone location with that ID was not found, then an HTTP Status 404 will be returned.

Also be sure to update your serverless.template file and add the settings for the new endpoint. Add a resource called GetSingle which will call the GetSingleTimeZone method. For the path you can specify /zones/{Id}. This will ensure that the path parameter called Id will be passed through to the method, which you remember we retrieved from the PathParameters collection.

NB: The name of the parameter is case-sensitive so be sure that if you specify the parameter name as Id in the serverless.template file that you check for exactly that name in your C# code.

{"AWSTemplateFormatVersion":"2010-09-09","Transform":"AWS::Serverless-2016-10-31","Description":"An AWS Serverless Application.","Resources":{"GetAll":{...},"GetSingle":{"Type":"AWS::Serverless::Function","Properties":{"Handler":"TimeZoneService::TimeZoneService.Functions::GetSingleTimeZone","Runtime":"dotnetcore1.0","CodeUri":"","MemorySize":256,"Timeout":30,"Role":null,"Policies":["AWSLambdaBasicExecutionRole"],"Events":{"PutResource":{"Type":"Api","Properties":{"Path":"/zones/{Id}","Method":"GET"}}}}}},"Outputs":{}}

Deploy the application as before, and make a request to the /zones endpoint, this time passing the ID for the single time zone you want to retrieve. Note that the time zone ID can contain a slash (/), so it needs to be URL encoded otherwise it will be interpreted as a path separator.
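As a quick illustration, here is a client-side sketch of calling the endpoint with a URL-encoded time zone ID. The base URL below is a placeholder - substitute your own AWS Serverless URL:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        const string baseUrl = "https://YOUR_API_ID.execute-api.us-east-1.amazonaws.com/Prod";

        // "America/New_York" becomes "America%2FNew_York" so that API Gateway
        // does not interpret the slash as a path separator.
        string zoneId = WebUtility.UrlEncode("America/New_York");

        using (var client = new HttpClient())
        {
            string json = await client.GetStringAsync($"{baseUrl}/zones/{zoneId}");
            Console.WriteLine(json);
        }
    }
}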

Source Code

The source code is available at https://github.com/jerriepelser-blog/aws-dotnet-serverless-app
