Archive for the ‘Technical’ Category

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

In my previous post (http://zimmergren.net/technical/sp-2013-some-new-delegatecontrol-additions-to-the-sharepoint-2013-master-pages) I talked about how you can use the new delegate controls in the master page (seattle.master) to modify a few things in the SharePoint UI, including the text in the top left corner saying "SharePoint". If your goal is simply to change the text, or to hardcode a link without the need for any code-behind, you can do it even more easily with PowerShell.

Changing the SharePoint text to something else using PowerShell

Before:

image

After:

image

PowerShell Snippet

$webApp = Get-SPWebApplication http://tozit-sp:2015
$webApp.SuiteBarBrandingElementHtml = "Awesome Text Goes Here"
$webApp.Update()
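
Since the property takes HTML, you can also get the hardcoded link mentioned above instead of plain text. A minimal sketch – the anchor markup and URL here are just examples:

$webApp = Get-SPWebApplication http://tozit-sp:2015
$webApp.SuiteBarBrandingElementHtml = "<a href='http://tozit-sp:2015'>Awesome Text Goes Here</a>"
$webApp.Update()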


Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

In this post we’ll take a quick look at some of the new DelegateControls I’ve discovered in SharePoint 2013, and how you can replace or add information to your new master pages using these controls without modifying the master pages themselves. This is done exactly the same way as you would do it back in 2010 (and 2007) projects; the only addition in this case is a few new controls that we’ll investigate.

New DelegateControls

Searching through the main master page, Seattle.master, I’ve found these three new DelegateControls:

  • PromotedActions
  • SuiteBarBrandingDelegate
  • SuiteLinksDelegate

So let’s take a look at where these controls are placed on the Master page and how we can replace them.

PromotedActions Delegate Control

The PromotedActions delegate control allows you to add your own content to the following area on a SharePoint site in the top-right section of the page:

image

An example of adding an additional link may look like this:

image

So what do the files look like for these parts of the project?

Elements.xml

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
    
  <!-- DelegateControl reference to the PromotedActions Delegate Control -->
  <Control ControlSrc="/_controltemplates/15/Zimmergren.DelegateControls/PromotedAction.ascx"
           Id="PromotedActions"
           Sequence="1" />
  
</Elements>

PromotedActions.ascx (User Control)

<!-- Note: I've removed the actual Facebook-logic from this snippet for easier overview of the structure. -->
<a title="Share on Facebook" class="ms-promotedActionButton" style="display: inline-block;" href="#">
    <span class="s4-clust ms-promotedActionButton-icon" style="width: 16px; height: 16px; overflow: hidden; display: inline-block; position: relative;">
        <img style="top: 0px; position: absolute;" alt="Share" src="/_layouts/15/images/Zimmergren.DelegateControls/facebookshare.png"/>
    </span>
    <span class="ms-promotedActionButton-text">Post on Facebook</span>
</a>

SuiteBarBrandingDelegate Delegate Control

This DelegateControl will allow you to override the content that is displayed in the top-left corner of every site. Normally, there’s a text reading "SharePoint" like this:

image

If we override this control we can easily replace the content here. For example, most people would probably like to add either a logo or at least make the link clickable so you can return to your Site Collection root web. Let’s take a look at what it can look like if we’ve customized it (this is also a clickable logo):

image

So what do the files look like for this project?

Elements.xml

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
    
  <!-- SuiteBarBrandingDelegate (the top-left "SharePoint" text on a page) -->
  <Control ControlSrc="/_controltemplates/15/Zimmergren.DelegateControls/SuiteBarBrandingDelegate.ascx"
           Id="SuiteBarBrandingDelegate"
           Sequence="1" />
  
</Elements>

SuiteBarBrandingDelegate.ascx (User Control)

This is the only content in my User Control markup:

<div class="ms-core-brandingText" id="BrandingTextControl" runat="server" />

SuiteBarBrandingDelegate.ascx.cs (User Control Code Behind)

protected void Page_Load(object sender, EventArgs e)
{
    BrandingTextControl.Controls.Add(new Literal
    {
        Text = string.Format("<a href='{0}'><img src='{1}' alt='{2}' /></a>", 
        SPContext.Current.Site.Url,
        "/_layouts/15/images/Zimmergren.DelegateControls/tozit36light.png",
        SPContext.Current.Site.RootWeb.Title)
    });
}

SuiteLinksDelegate Delegate Control

The SuiteLinksDelegate control will allow us to modify the default links, and to add our own links, in the "suite links" section:

image

By adding a custom link to the collection of controls, it can perhaps look like this:

image

What do the project files look like for modifying the SuiteLinksDelegate? Well, here’s an example:

Elements.xml

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  
  <!-- DelegateControl reference to the SuiteLinksDelegate Delegate Control -->
  <Control ControlSrc="/_controltemplates/15/Zimmergren.DelegateControls/SuiteLinksDelegate.ascx"
           Id="SuiteLinksDelegate"
           Sequence="1" />
  
</Elements>


SuiteLinksDelegate.ascx.cs (User Control Code Behind)

public partial class SuiteLinksDelegate : MySuiteLinksUserControl
{
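    // Note: MySuiteLinksUserControl is the base class exposing the RenderSuiteLink helper used below
    // (it lives in the Microsoft.SharePoint.Portal.WebControls namespace, if I recall correctly).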
    protected override void Render(HtmlTextWriter writer)
    {
        writer.RenderBeginTag(HtmlTextWriterTag.Style);
        writer.Write(".ms-core-suiteLinkList {display: inline-block;}");
        writer.RenderEndTag();
        writer.AddAttribute(HtmlTextWriterAttribute.Class, "ms-core-suiteLinkList");
        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            
        // The last (true/false) parameter indicates whether this should be rendered as the active link - since this one points to an external URL, it will never be active.
        RenderSuiteLink(writer, "http://timelog.tozit.com", "Time Report", "ReportYourTimeAwesomeness", false);

        writer.RenderEndTag();
        base.Render(writer);
    }
}

Solution overview

For reference: I’ve structured the project so that all the changes go into one single Elements.xml file, and they’re activated through a Site-scoped Feature called DelegateControls. The solution is a Farm Solution, and all required artifacts are deployed through this package.

image
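
If you prefer activating the Feature from PowerShell rather than the UI, something along these lines should do – a sketch where the feature identity is a placeholder, so use the actual feature folder name or GUID from your own package:

# The identity below is a placeholder - check the feature's folder name or GUID in your WSP
Enable-SPFeature -Identity "Zimmergren.DelegateControls_DelegateControls" -Url http://tozit-sp:2015

# De-activating the feature removes the customizations again
Disable-SPFeature -Identity "Zimmergren.DelegateControls_DelegateControls" -Url http://tozit-sp:2015 -Confirm:$false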

Summary

In this post we’ve looked at how we can customize some of the areas in a SharePoint site without using master page customizations. We’ve used the good-old approach of hooking up a few Delegate Control overrides to our site collection. Given the approach of Delegate Controls, we can easily just de-activate the feature and all our changes are gone. Simple as that.

In SharePoint 2013 we can still do Delegate Control overrides just like we did back in our 2007 and 2010 projects, and it’s still pretty slick. I haven’t investigated any master pages other than Seattle.master right now – perhaps there are more new delegate controls somewhere else. Let’s find out..

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

In one of my previous posts I talked about "Using the SPField.JSLink property to change the way your field is rendered in SharePoint 2013". That article talks about how we can set the JSLink property in our SharePoint solutions and how we easily can change the rendering of a field in any list view.

It’s pretty slick and I’ve already had one client make use of the functionality in their pilot SharePoint 2013 farm. However, I got a comment from John Lui asking what the performance would be like when executing the same iterative code over thousands of items. Since I hadn’t done any real tests of this myself, I thought it’d be a good time to try to pinpoint and measure whether the client-side load times differ when using the JSLink property.

Measurements

All tests will first be performed WITH the JSLink property set, and then repeated WITHOUT the JSLink property set.

I’ve set up my scenario with various views on the same list. The total item count of the list is 5000 items, but we’ll base our tests on the limit of our views:

  • Test 1: View Limit 100
  • Test 2: View Limit 500

The results will be presented as the difference between using the JSLink property and not using it. Should be fun.

Tools

I’ve been using these tools for measuring performance on the client:

Fiddler4, YSlow, IE Developer Tools

The code that will be executed will be the same JavaScript code as I had in my previous article.
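
For completeness, here’s roughly how I toggle the property between the two test series – a sketch that assumes the JS Link was set on the List View Web Part as in my previous article, with the URL and page path below being examples from that setup:

$web = Get-SPWeb http://tozit-sp:2015
$page = "/Lists/Sample%20Tasks/AllItems.aspx"
$file = $web.GetFile($page)
$file.CheckOut()

$webPartManager = $web.GetLimitedWebPartManager($page, [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)
$webpart = $webPartManager.WebParts[0]

# An empty string (or $null) should revert the Web Part to the default rendering for the "without JSLink" series
$webpart.JSLink = ""
$webPartManager.SaveChanges($webpart)

$file.CheckIn("Cleared JS Link for the baseline measurements")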

SharePoint 2013 browser compatibility

If you intend to measure performance on various browsers, make sure you’ve pinned down what browsers are actually supported by SharePoint 2013. The following versions of browsers are supported:

  • IE 10, IE 9, IE 8
  • Chrome
  • Safari
  • Firefox

Let’s start measuring

Let’s start by taking a look at how I’ve done my tests and what the results were.

Testing methods

Each test was executed 100 times, and the loading time result I’ve used for each test is the average of the 100 attempts. These tests were executed in a virtual environment, so obviously the exact timings will differ from your environment – what’s mostly interesting here, though, is the relative difference between having JSLink set and not set when rendering a list view, so that’s what we’ll focus on.

I’ve performed these tests on IE 10 and the latest version of Firefox. Since older browsers may handle scripts less efficiently than these two, you may experience different results when using, for example, IE 8.

Results overview

SharePoint 2013 is pretty darn fantastic in the way it renders its contents and pages. The measurements I’ve done here are based on the entire page, with all of its content, loading. The chrome of the page (navigation, headers, etc.) loads almost instantly – literally in less than 25 ms – but the full page takes longer, since rendering the content of the list view takes considerably more time. Here’s the output…

Using 100 Item Limit in the View

image

Difference: 969 milliseconds

Conclusion

There’s not really much to argue about with the default 100-item limit. There’s a difference of almost one second, which is pretty bad to be honest. I would definitely revise these scripts and optimize the performance if I wanted quicker load times. However, when I changed the scripts, removed the rendering of images and used plain text instead, there was very little difference. So I guess it comes down to what you actually put into those scripts and how you optimize your JavaScript.

Using 500 Item Limit in the View

image

Difference: 529 milliseconds

Conclusion

The load times are naturally longer when returning 500 items, but the difference was smaller on a larger result set. I also performed the same tests using a 1000-item limit in the view, and the difference per page load was between 500 ms and 1000 ms – essentially the same as in these two tests. If your page takes 7-8 seconds to load without using JS Link, like these samples did in my virtual environment, I’d probably focus on fixing that before being too concerned about the impact the JS Link rendering process will have on your page. However, be advised that if you put more advanced logic into the scripts, it may very well be worth your while to draft up some tests for it.

Things to take into consideration

  • The sample script here only replaces some strings based on the context object and replaces with an image. No heavy operations.
  • Replacing strings with images took considerably longer to render than just replacing and rendering text. Consider the code you put in your script and make sure you’ve optimized it for performance, scope your variables properly and so on.
  • Take your time to learn proper JavaScript practices. It’ll be worth it in the end if you’re going to do a lot of client side rendering stuff down the road.
  • If you’ve got access to Scot Hillier’s sessions from SPC12, review them!

Summary

It’s not very often I’ve seen anyone use 1000 items as the item limit per view in an ordinary List View Web Part. Most of my existing configurations use 100 or fewer (most likely around 30) items per page for optimal performance – however, should you have larger views, you should of course consider the impact the rendering will have if you decide to hook up your own custom client-side rendering awesomeness.

You’ll notice the biggest relative difference in page load times if you’ve got a smaller item row limit in your view, simply because using the custom JS Link property seems to add between 500 and 1000 milliseconds whether I’m returning 100, 500 or 2500 items in my view. Worth considering.

With that said – it’s a pretty cool feature and I’ve already seen a lot more use cases for some of my clients to utilize these types of customizations. It’s a SUPER-AWESOME way to customize the way your list renders data, instead of converting your List View Web Parts (or XSLT List View Web Parts and so on) into Data View Web Parts like some people did with SharePoint Destroyer.. Err.. SharePoint Designer. For me as a developer/IT/farm admin guy this will (hopefully) make upgrades easier as well, as the list itself will be less customized and only loads an external script in order to make the customizations appear. Obviously I’m hoping for all scripts to end up in your code repositories with revision history, fully documented and so on – but then again I do like to dream :-)

Enjoy.

SP 2013: Searching in SharePoint 2013 using the new REST APIs

December 26th, 2012 by Tobias Zimmergren

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

Search has always been a great way to create custom solutions that aggregate and find information in SharePoint. With SharePoint 2013, the search capabilities have seen heavy investment and we get a lot of new ways to perform our searches. In this post I’ll show a simple example of how you can utilize the REST Search APIs in SharePoint 2013 to perform a search.

REST Search in SharePoint 2013

So in order to get started, we’ll need an ASPX page containing some simple markup and a JavaScript file where we’ll put the functions to be executed. The approach mentioned in this post is also compatible with SharePoint Apps, should you decide to develop an App that relies on Search. In my example I’ve created a custom page which loads my jQuery and JavaScript files, and I’m deploying those files to the SiteAssets library using a Module.

Preview of the simple solution

image

Creating a Search Application using REST Search Api’s

Let’s examine how you can construct your search queries using REST formatting and by simply changing the URL to take in the proper query strings.

Formatting the Url

By formatting the Url properly you can retrieve search results pretty easily using REST.

Query with querytext

If you simply want to return all results from a search, with no limits or filters, request this formatted URL:

http://tozit.dev/_api/search/query?querytext=’Awesome’

This will yield the following result:

image

Essentially we’re getting a bunch of XML back from the query, which we then have to parse and handle somehow. We can then use the result of this query in our application in whatever fashion we want. Let’s see what a very simple application could look like, utilizing the SharePoint 2013 REST Search APIs.
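
The query string can also be extended with a few more parameters to shape the result – for example limiting the number of rows returned or selecting specific managed properties. Something along these lines (the managed property names below are just examples):

http://tozit.dev/_api/search/query?querytext=’Awesome’&rowlimit=10

http://tozit.dev/_api/search/query?querytext=’Awesome’&selectproperties=’Title,Path,Write’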

ASP.NET Markup Snippet

As we can see in the following code snippet, we’re simply loading a few jQuery and JavaScript files that we require and we define a Search Box and a Button for controlling our little Search application.

<!-- Load jQuery 1.8.3 -->
<script type="text/javascript" src="/SiteAssets/ScriptArtifacts/jquery-1.8.3.min.js"></script>

<!-- Load our custom Rest Search script file -->
<script type="text/javascript" src="/SiteAssets/ScriptArtifacts/RestSearch.js"></script>

<!-- Add a Text Box to use as a search box -->
<input type="text" value="Awesome" id="searchBox" />

<!-- Add a button that will execute the search -->
<input type="button" value="Search" onclick="executeSearch()" />

<div id="searchResults"></div>

JavaScript logic (RestSearch.js)

I’ve added a file to the project called RestSearch.js, a custom JavaScript file containing the following code, which performs an AJAX request to SharePoint using the Search APIs:

// in reality we should put this inside our own namespace, but this is just a sample.
var context;
var web;
var user;

// Called from the ASPX Page
function executeSearch()
{
    var query = $("#searchBox").val();

    SPSearchResults =
    {
        element: '',
        url: '',

        init: function(element) 
        {
            SPSearchResults.element = element;
            SPSearchResults.url = _spPageContextInfo.webAbsoluteUrl + "/_api/search/query?querytext='" + query + "'";
        },

        load: function() 
        {
            $.ajax(
                {
                    url: SPSearchResults.url,
                    method: "GET",
                    headers:
                    {
                            "accept": "application/json;odata=verbose",
                    },
                    success: SPSearchResults.onSuccess,
                    error: SPSearchResults.onError
                }
            );
        },

        onSuccess: function (data)
        {
            var results = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;

            var html = "<div class='results'>";
            for (var i = 0; i < results.length; i++)
            {
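                // Note: the Cells indices used below (8, 6, 3 and 17) are positional and simply match
                // the order of the managed properties returned in my environment (modified date, path,
                // title and file type respectively) - they may differ in yours, so inspect the response.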
                var d = new Date(results[i].Cells.results[8].Value);
                // Note: getMonth() is zero-based, so add 1 to get the calendar month
                var currentDate = d.getFullYear() + "-" + (d.getMonth() + 1) + "-" + d.getDate() + " " + d.getHours() + ":" + d.getMinutes();

                html += "<div class='result-row' style='padding-bottom:5px; border-bottom: 1px solid #c0c0c0;'>";
                var clickableLink = "<a href='" + results[i].Cells.results[6].Value + "'>" + results[i].Cells.results[3].Value + "</a><br/><span>Type: " + results[i].Cells.results[17].Value  + "</span><br/><span>Modified: " + currentDate + "</span>";
                html += clickableLink;
                html += "</div>";
            }

            html += "</div>";
            $("#searchResults").html(html);
        },

        onError: function (err) {
            $("#searchResults").html("<h3>An error occurred</h3><br/>" + JSON.stringify(err));
        }
    };

    // Call our Init-function
    SPSearchResults.init($('#searchResults'));

    // Call our Load-function which will post the actual query
    SPSearchResults.load();
}

The aforementioned script is some simple JavaScript that calls the Search REST API (the "_api/…" part of the query) and then renders the results in our HTML markup. Simple as that.

Summary

By utilizing the REST Search API we can very quickly and easily create an application that searches in SharePoint 2013.

This can be implemented in SharePoint Apps, Sandboxed Solutions or Farm Solutions. Whatever your preferences and requirements, the Search APIs should be easy enough to play around with.

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

So, just a simple tip in case anyone bumps into the same issue I had a while back. Going from Beta to RTM, some things changed in the way you retrieve values using REST in the SharePoint 2013 client object model.

Description & Solution

In the older versions of the object model, you could simply use something like this in your REST call:

$.ajax(
    {
        url: SPSearchResults.url,
        method: "GET",
        headers:
        {
                "accept": "application/json",
        },
        success: SPSearchResults.onSuccess,
        error: SPSearchResults.onError
    }
);

As you can see, the "headers" section specifies "application/json".

The fix is simply to swap that statement for this:

$.ajax(
    {
        url: SPSearchResults.url,
        method: "GET",
        headers:
        {
                "accept": "application/json;odata=verbose",
        },
        success: SPSearchResults.onSuccess,
        error: SPSearchResults.onError
    }
);

And that’s a wrap.

Summary

I hope this can save someone a few minutes (or more) of debugging when using old example code or digging up older projects. I’ve found that a lot of examples online, based on the beta of SharePoint 2013, use the older version of the headers statement, which inevitably leads to this problem. So with that said, enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

Recently I’ve had the pleasure (and adventure..) of upgrading a few SharePoint 2010 solutions to SharePoint 2013. One of the things that comes up in literally every project I’m involved in is the ability to quickly and easily change how a list is rendered – more specifically, how the fields in that list should be displayed.

Luckily, in SharePoint 2013 Microsoft has extended SPField with a new property called JSLink, which is a JavaScript Link property. There’s a JSLink property on the SPField class, as well as a "JS Link" property on, for example, List View Web Parts. If we set this property to point to a custom JavaScript file, we can have SharePoint render our fields in a certain way. We can also tell, for example, our List View Web Parts to point to a specific JavaScript file, since they’re also extended with the JS Link property.

In this blog post I’ll briefly explain what the "JS Link" for a List View Web Part can look like and how you can set the property using PowerShell, C# and in the UI. I’ll also mention ways to set the SPField JSLink property, should you want to use that.

Final results

If you follow along with this article you should be able to render a similar result to this view in a normal Task List in SharePoint 2013:

image

You can see that the only modification I’ve made right now is to display an icon indicating the importance of the task: red for High priority, and blue and yellow for Low and Medium.

Since it’s all based on JavaScript and we’re fully in control of the rendering, we could also change the rendering to look something like this, should we want to:

image

As you may have noticed, I haven’t put a lot of effort into styling these elements – but you could easily apply some nicer styling through the JavaScript, either by hooking up a CSS file or by using inline/embedded styles.

Configuring the JSLink properties

Okay, all of that sounds cool and all – but where do I actually configure this property?

Set the JS Link property on a List View Web Part

If you just want to modify an existing list with a custom rendering template, you can specify the JSLink property of any existing list by modifying its Web Part properties and configuring the "JS Link" property, like this:

image

If you configure the aforementioned property on the List View Web Part, your list will automatically load your custom JavaScript file upon rendering.

Set the SPField.JSLink property in the Field XML definition

If you are creating your own field, you can modify the Field XML and have the property set through the definition like this:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">  
  <Field
       ID="{43001095-7db7-4219-9df9-b4b0f281530a}"
       Name="MyAwesomeSampleField"
       DisplayName="My Awesome Sample Field"
       Type="Text"
       Required="FALSE"
       JSLink="/_layouts/15/Zimmergren.JSLinkSample/Awesomeness.js"
       Group="Blog Sample Columns">
  </Field>
</Elements>

Set the SPField.JSLink property using the Server-Side Object Model

Simply set the SPField.JSLink property like this. Please note that this code was executed from a Console Application, hence the instantiation of a new SPSite object:

using (SPSite site = new SPSite("http://tozit-sp:2015"))
{
    SPWeb web = site.RootWeb;
    SPField taskPriorityField = web.Fields["Priority"];

    taskPriorityField.JSLink = "/_layouts/15/Zimmergren.JSLinkSample/AwesomeFile.js";
    taskPriorityField.Update(true);
}

Set the SPField.JSLink property using PowerShell

If you’re the PowerShell-kind-of-guy-or-gal (and you should be, if you’re working with SharePoint…) then you may find the following simple snippets interesting, as they should come in handy soon enough.

PowerShell: Configure the JSLink property of an SPField

$web = Get-SPWeb http://tozit-sp:2015
$field = $web.Fields["Priority"]
$field.JSLink = "/_layouts/15/Zimmergren.JSLinkSample/Awesomeness.js"
$field.Update($true)

PowerShell: Configure the JSLink property of a Web Part

Note that this is what I’ve been doing in the sample code later on – I’m not setting a custom JSLink for the actual SPField, I’m setting it for the List View Web Part.

$web = Get-SPWeb http://tozit-sp:2015

$webPartPage = "/Lists/Sample%20Tasks/AllItems.aspx"
$file = $web.GetFile($webPartPage)
$file.CheckOut()

$webPartManager = $web.GetLimitedWebPartManager($webPartPage, [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

$webpart = $webPartManager.WebParts[0]

$webpart.JSLink = "/_layouts/15/Zimmergren.JSLinkSample/Awesomeness.js"

$webPartManager.SaveChanges($webpart)

$file.CheckIn("Awesomeness has been delivered")

As you can see in the PowerShell snippet above, we’re picking out a specific page (a Web Part page in a list, in my case), grabbing the first Web Part (since I know I only have one Web Part on my page) and setting the JS Link property there. This results in the Web Part getting the proper link set, so it can utilize the code in your custom JavaScript file to render the results.

So what does the JS Link JavaScript logic look like?

Okay, so we’ve seen a few ways to modify the JS Link property of a list or a field, but we still haven’t looked at how the actual JavaScript works or what it can look like. Let’s take a quick look at what it could look like for a List View Web Part rendering our items:

// Create a namespace for our functions so we don't collide with anything else
var zimmergrenSample = zimmergrenSample || {};

// Create a function for customizing the Field Rendering of our fields
zimmergrenSample.CustomizeFieldRendering = function ()
{
    var fieldJsLinkOverride = {};
    fieldJsLinkOverride.Templates = {};

    fieldJsLinkOverride.Templates.Fields =
    {
        // Make sure the Priority field view gets hooked up to the GetPriorityFieldIcon method defined below
        'Priority': { 'View': zimmergrenSample.GetPriorityFieldIcon }
    };

    // Register the rendering template
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(fieldJsLinkOverride);
};

// Create a function for getting the Priority Field Icon value (called from the first method)
zimmergrenSample.GetPriorityFieldIcon = function (ctx) {
    var priority = ctx.CurrentItem.Priority;

    // In the following section we simply determine what the rendered html output should be. In my case I'm setting an icon.

    if (priority.indexOf("(1) High") != -1) {
        //return "<div style='background-color: #FFB5B5; width: 100%; display: block; border: 2px solid #DE0000; padding-left: 2px;'>" + priority + "</div>";
        return "<img src='/_layouts/15/images/Zimmergren.JSLinkSample/HighPrioritySmall.png' />&nbsp;" + priority;
    }

    if (priority.indexOf("(2) Normal") != -1) {
        //return "<div style='background-color: #FFFFB5; width: 100%; display: block; border: 2px solid #DEDE00; padding-left: 2px;'>" + priority + "</div>";
        return "<img src='/_layouts/15/images/Zimmergren.JSLinkSample/MediumPrioritySmall.png' />&nbsp;" +priority;
    }

    if (priority.indexOf("(3) Low") != -1) {
        //return "<div style='background-color: #B5BBFF; width: 100%; display: block; border: 2px solid #2500DE; padding-left: 2px;'>" + priority + "</div>";
        return "<img src='/_layouts/15/images/Zimmergren.JSLinkSample/LowPrioritySmall.png' />&nbsp;" + priority;
    }

    return ctx.CurrentItem.Priority;
};

// Call the function. 
// We could've used a self-executing function as well but I think this simplifies the example
zimmergrenSample.CustomizeFieldRendering();

With the above script, we’ve simply told our field (Priority) that when it’s rendered, it should format the output HTML according to the contents of my methods. In this case we’re making a very simple replacement of text with an image to visually indicate the importance of the task.

For examples of how you can construct your SPField.JSLink JavaScript, head on over to Dave Mann’s blog and check it out. Great info!

Summary

With a few simple steps (essentially just a JavaScript file and a property on the LVWP or Field) we’ve changed how a list is rendering its data. I’d say that the sky is the limit and I’ve already had one of my clients implement a solution using a custom JS Link to format a set of specific lists they have. What’s even better is that it’s so simple to do, we don’t even have to do a deployment of a new package if we don’t want to.

The reason I’ve chosen to do a Farm Solution (hence the /_layouts paths you see in the URLs) is that most of my clients still run Farm Solutions – and will be running them for a long time to come. It also wraps everything up in a nice package for us to deploy globally in the farm, and then a quick PowerShell script can change the properties of the LVWPs we want to modify, and that’ll be that. Easy as 1-2-3.

Enjoy.

Introduction

As most if not all of you already know, SharePoint 2013 and Office 2013 have been released to preview/beta and are available for download from Microsoft’s download centers. In this article I will briefly introduce some exciting changes that have been made to the SharePoint 2013 Business Connectivity Services framework. I’ve written a bunch of articles on BCS for SharePoint 2010, and now it’s time to continue that track and introduce the new features available in SharePoint 2013.

Please note that this article is written for SharePoint 2013 Preview, and some details may have changed for the final version of SharePoint.

  1. SharePoint 2013: Business Connectivity Services – Introduction
  2. SharePoint 2013: Business Connectivity Services – Consuming OData in BCS Using an App External Content Type
  3. SharePoint 2013: Business Connectivity Services – Talking to your external lists using REST
  4. SharePoint 2013: Business Connectivity Services – Client Object Model
  5. SharePoint 2013: Business Connectivity Services – Events and Alerts

SharePoint 2013 – BCS with REST

In this article we’ll be taking a further look at the new Business Connectivity Services changes and additions in SharePoint 2013. I want to introduce how REST (Representational State Transfer) can be used, and also how the Client Object Model can be utilized, to communicate with your External Lists and BCS Entities.

So without further delays, let’s dig into the fantastic world of BCS and CSA (Client Side Awesomeness).

Important: The project type used here, as mentioned in the post where we started building our sample, is a SharePoint-hosted app.

Business Connectivity Services and REST

Utilizing REST with BCS in your SharePoint 2013 environment isn’t really that big of a deal – it’s pretty straightforward. The first thing you should do is take a look at the following link, which references the MSDN article about getting started with REST in SharePoint 2013.

MSDN Reference: http://tz.nu/QXfooY

So now that we know what REST is all about, we’re going to look at some code that utilizes REST to retrieve some data. In this sample I’ll continue to build on the solution I created in the previous article.

The final result will look like this

When our first snippet of code is done, it’ll look something like this when our App page is loaded:

image

What the code will do is pull out the images and links for the videos in the Telerik.Tv OData data source that we created in the previous article, and we’ll be doing this using the REST APIs.

In one big snippet, here’s what the REST call looks like in my App.js:

var context;
var web;
var user;

function sharePointReady()
{
    var requestUri = _spPageContextInfo.webAbsoluteUrl + "/_api/lists/getbytitle('Video')/items";
    jqxhr = $.getJSON(requestUri, null, onSuccessfullJSONCall);
    jqxhr.error(onErrorInJSONCall);
}

function onSuccessfullJSONCall(data) {

    var outputHtml = "";
    var results = data.d.results;
    var counter = 0;
    for (var i = 0; i < results.length; i++) {
        outputHtml += "<img src='" + results[i].ImageUrl + "' class='brick-image' />";
        counter++;

        if (counter >= 4) {
            outputHtml += "<br/>";
            counter = 0;
        }
    }

    $("#message").html(outputHtml);
}

function onErrorInJSONCall(err) {
    $("#message").text("Unawesome Error: " + JSON.stringify(err));
}

As you can see in the aforementioned code snippet, the calls to the REST APIs are pretty straightforward, and I’m only making a quick call to fetch the data asynchronously using the $.getJSON() method.

Let’s break it down into sections.

Script part 1: sharePointReady() method call

Since our project type (a SharePoint-hosted App) contains stubs for the Default.aspx page and the App.js file, a method called sharePointReady is defined in App.js, and in Default.aspx you have the following block, which initiates the call to this method once all other required awesome things have loaded:

    <!-- The following script runs when the DOM is ready. The inline code uses a SharePoint feature to ensure -->
    <!-- The SharePoint script file sp.js is loaded and will then execute the sharePointReady() function in App.js -->
    <script type="text/javascript">
        $(document).ready(function () {
            SP.SOD.executeFunc('sp.js', 'SP.ClientContext', function () { sharePointReady(); });
        });
    </script>

(Optionally, you can put this code-block in the actual App.js file as well if you’d like – it’s up to you)

When SharePoint is done loading all the required stuff that it needs to function (sp.js), this method is executed. In this sample, I’m making a simple JSON call to the REST API using the syntax http://url/_api/lists/getbytitle(‘ListTitle’)/items which will get all list items in the list named Video:

function sharePointReady()
{
    var requestUri = _spPageContextInfo.webAbsoluteUrl + "/_api/lists/getbytitle('Video')/items";
    jqxhr = $.getJSON(requestUri, null, onSuccessfullJSONCall);
    jqxhr.error(onErrorInJSONCall);
}

Script part 2: On successful JSON callback, the following block is executed

We’re executing onSuccessfullJSONCall() when the previous call succeeds; it basically just parses the JSON result and pushes the items out in a simple HTML structure:

function onSuccessfullJSONCall(data) {

    var outputHtml = "";
    var results = data.d.results;
    var counter = 0;
    for (var i = 0; i < results.length; i++) {
        outputHtml += "<img src='" + results[i].ImageUrl + "' class='brick-image' />";
        counter++;

        if (counter >= 4) {
            outputHtml += "<br/>";
            counter = 0;
        }
    }

    $("#message").html(outputHtml);
}

Script part 3: In case of unawesomeness (failure)

If the request fails, we handle it in this method. In this case we’ll just print out the error message to the user:

function onErrorInJSONCall(err) {
    $("#message").text("Unawesome Error: " + JSON.stringify(err));
}

Summary

In this article we took a very quick look at how to get that first REST call working with SharePoint 2013 against our External List called “video”. With the returned result in JSON, we’re parsing it out and simply rendering the result in a more reader-friendly manner as pictures in a simple HTML grid.

So that’ll be it for the REST calls for now. It should get you started and set-up for creating a very simple application that utilizes REST for retrieving data from SharePoint.

More details on working with the REST API’s in SharePoint 2013 will follow later, including how to perform cross-domain queries.

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

As most if not all of you already know, SharePoint 2013 and Office 2013 have been released to preview/beta and are available for download from Microsoft’s download centers. In this article I will briefly introduce some exciting changes that have been made to the SharePoint 2013 Business Connectivity Services framework. I’ve written a bunch of articles on BCS for SharePoint 2010, and now it’s time to continue that track and introduce the new features available in SharePoint 2013.

Please note that this article is written for SharePoint 2013 Preview, and some details may have changed for the final version of SharePoint.

  1. SharePoint 2013: Business Connectivity Services – Introduction
  2. SharePoint 2013: Business Connectivity Services – Consuming OData in BCS Using an App External Content Type
  3. SharePoint 2013: Business Connectivity Services – Talking to your external lists using REST
  4. SharePoint 2013: Business Connectivity Services – Client Object Model
  5. SharePoint 2013: Business Connectivity Services – Events and Alerts

SharePoint 2013 – BCS with OData

In this specific article we’ll be looking at how you can consume data from an Open Data Protocol (OData) data source. Tag along for the ride!

First of all, tell me what OData is…!?

Right, so OData is a Web protocol for performing CRUD (Create, Read, Update and Delete) operations, and it builds on technologies like HTTP, Atom and JSON.

As quoted from the OData Web Site:

The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today.
OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores.

We can have lengthy discussions about how awesome (or unawesome) OData is, but in this article we’ll be focusing on consuming OData data sources in SharePoint 2013 through Business Connectivity Services, one of my favorite parts of SharePoint.
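
To give you a feel for those URL-based queries before we open Visual Studio: an OData service exposes a service document listing its entity sets, and you append standard system query options such as $top, $filter and $orderby to shape the result. Roughly like this against the Telerik service we’ll use below – note that the "Videos" entity set name is my assumption here, so check the service document first:

http://tv.telerik.com/services/odata.svc/ (the service document, listing the available entity sets)

http://tv.telerik.com/services/odata.svc/Videos?$top=5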

Launch Visual Studio and get started!

So, as you probably could tell by the title of this post – you should now open Visual Studio 2012 :-)

Create the project

Choose the project type App for SharePoint 2013:

SharePoint 2013 bcs odata app

The Wizard that launches will give you a few choices which we’ll discuss in another article. In this article I’ve selected the “SharePoint Hosted” alternative, as you can see below:

SharePoint 2013 bcs odata app

This should get you set up with the initial project structure, and you should now have a project that looks something like this:

SharePoint 2013 bcs odata app project

Add an external content type

Now that we’ve created our initial project, we should add a new external content type. If you right-click your project and choose Add, you can see that there’s a new menu item called “Content Types for an External Data Source…“. Click it:

SharePoint 2013 bcs content type

In the following wizard, you can enter the service URL for the OData provider you’d like to consume. In my example I’ve used the public tv.telerik.com/services/odata.svc OData service:

Telerik OData in SharePoint 2013

In the next step of the wizard, you get to select which data entities you want to generate external content types for. Choose the entities you’d like to work with and continue by pressing Finish.
Please note that you should leave the "Create list instances for the selected data entries (except Service Operations)." checkbox checked, so the tools can create your external list automatically:

SharePoint 2013 BCS Entity

After we’ve completed this step, we can see a few additions in our project that Visual Studio 2012 has been so kind to help us out with.

A Telerik Video external content type with its associated ListInstances (one per entity; in my case only one, for Video), which it was also kind enough to create for us since I had the checkbox in the previous dialog ticked:

image

Looking in your Feature explorer, you can see that the default Feature now contains your artifacts along with the newly created List Instance and External Content Type:

image

In the newly created List Instances, the following XML has been generated for us:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <ListInstance Url="Lists/Video" Description="Video" OnQuickLaunch="TRUE" Title="Video">
    <DataSource>
      <Property Name="LobSystemInstance" Value="Telerik OData" />
      <Property Name="EntityNamespace" Value="TelerikTvDataServiceDataServiceModels" />
      <Property Name="Entity" Value="Video" />
      <Property Name="SpecificFinder" Value="ReadSpecificVideo" />
      <Property Name="MetadataCatalogFileName" Value="BDCMetadata.bdcm" />
    </DataSource>
  </ListInstance>
</Elements>

As you can see, the ListInstance DataSource node contains information about connecting to our newly created BCS entity. Perfect.

Just to make sure that we can deploy our solution, hit F5 and verify that VS2012 deploys your project properly.

Validate that your application works

Navigate to your site and validate that your new App is properly deployed and showing:

SharePoint 2013 app

Click the app and make sure you can see the result of your very basic and “hello world”-ish app:

image

Display your new external list!

Do you remember that in one of the previous steps I mentioned you should leave the checkbox ticked in the dialog saying "Create list instances for the selected data entries (except Service Operations)."? Well, the reason is that we want the Visual Studio 2012 tools to create this list for us, so we don’t have to do it ourselves.

With the newly deployed BCS external content type app and its created list, you can access the content of the list at the following URL (you’ll need to check the ListInstance element in your Elements.xml to find out your URL):

http://yourappurl/ZimmergrenSP2013BCSOData/Lists/Video

Navigate to this URL and you’ll see this view:

SharePoint 2013 BCS OData Telerik Tv

So that’s pretty cool and easy, right? Straight from the XML-formatted OData source we’ve pulled some info into our App in SharePoint.

Summary

As you can see, working with SharePoint 2013 and the OData model with BCS is pretty straightforward. In this example I created an App for SharePoint utilizing the BCS framework to connect to an OData data source. I’m pretty impressed with the options this opens up for many of our existing SharePoint 2010 projects that will be upgraded to SharePoint 2013 sometime in the future.

In the next few posts we’ll dive more into the news around BCS for SharePoint 2013.

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

As most if not all of you already know, SharePoint 2013 and Office 2013 have been released to preview/beta and are available for download from Microsoft’s download centers. In this article I will briefly introduce some exciting changes that have been made to the SharePoint 2013 Business Connectivity Services framework. I’ve written a bunch of articles on BCS for SharePoint 2010, and now it’s time to continue that track and introduce the new features available in SharePoint 2013.

At the time of this writing, I’m looking into the details of upgrading a SharePoint 2010 solution to SharePoint 2013 for one of my clients who relies heavily on BCS – and the new features in 2013 are pretty slick – so I’ll be documenting and writing about some of the new enhancements for developers in this series!

Please note that this article is written for SharePoint 2013 Preview, and some details may have changed for the final version of SharePoint.

  1. SharePoint 2013: Business Connectivity Services – Introduction
  2. SharePoint 2013: Business Connectivity Services – Consuming OData in BCS Using an App External Content Type
  3. SharePoint 2013: Business Connectivity Services – Talking to your external lists using REST
  4. SharePoint 2013: Business Connectivity Services – Client Object Model
  5. SharePoint 2013: Business Connectivity Services – Events and Alerts

SharePoint 2013 – Enhancing your BCS experience

After playing around and digging into the BCS playground for a while, these are some of the initial enhancements I’ve discovered and played with:

Support for OData in BCS

With SharePoint 2013 we now have access to the so-called "Open Data Protocol", or OData. This is a protocol that enables us to access data sources in a way that we haven’t been able to previously – using specially constructed URLs.

Read more on OData here: http://www.odata.org/ecosystem

In the next article in this series, I will talk about how you can consume OData through BCS in a SharePoint 2013 solution. Stay tuned!

Events and Alerts from external systems

One of the features we’ve missed in a lot of our solutions built on BCS is the ability to simply hook up an alert or trigger an event when things happen in the BCS data source. In SharePoint 2013 this has been addressed and we now have the ability to actually trigger some events and subscribe to alerts. Exciting news indeed.

This is pretty cool and lets the external data source notify SharePoint about things that have changed and trigger some kind of action in response. Read more about Events and Alerts for BCS.

In one of the next articles in this series, I will talk about events and alerts more thoroughly and walk you through the process of creating a solution and subscribing to events happening in the data source. Happy times!

Building App-scoped External Content Types

Have you heard about the new App-model for SharePoint and Office? Well if you haven’t, go to Bing.com and perform a search for it and check it out – AWESOME!

Anyway, I’ve been looking a lot at Apps lately, and that obviously means a lot of thoughts coming together around the topic. One thing I’ve found that is pretty interesting, speaking of BCS, is that you can create an App-scoped external content type to consume external data. This essentially means that you don’t have to deploy your solution to the Farm anymore, but can deploy it as an App instead.

In one of the next articles in this series, I will talk about building App-scoped external content types. Until then, you can find more info here: App-scoped external content types in SharePoint 2013

Enhanced support for the Client Object Model / REST

Obviously, one of the heavy pushes Microsoft is making is a strong focus on the Client APIs. With this focus there have been some improvements in working with BCS Entities from the Client Object Model as well. There have been some pretty neat enhancements, which I will discuss in one of the articles in this series as well. Until then, take a look here: Get started using the client object model with external data in SharePoint 2013

Summary

This is the first post laying out the topics of my next few BCS articles for SharePoint 2013. If there’s something you’d like to explore or talk about, feel free to drop a comment and I’ll see if it can make it into the article series.

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

If you haven’t followed the trend today, you’ve most certainly missed out. The Office 2013 and SharePoint 2013 preview versions were released to the wild today. Steve Ballmer did a live press release where he revealed that the new versions of Office and SharePoint are now officially available for public beta consumption. This is pretty amazing news, so I’ll just leave you with the following information and links, and you can be certain that I’ll update this blog regularly with SharePoint 2013 awesomeness from this day forth :-)

office15

Download the SharePoint 2013 preview binaries

You can find some of the relevant downloads right here:

  • SharePoint 2013 binaries
  • SharePoint Designer 2013 binaries
  • SharePoint Server 2013 SDKs
  • Microsoft Office Web Apps Server 2013
  • Duet Enterprise for SharePoint 2013 and SAP 2.0

Enjoy – and see you on the other side :-)

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

One of the culprits of working with the auto-generated LINQ to SharePoint contexts is that they don’t necessarily expose all the fields or properties you require – for example the built-in "Created By" field or the "Attachments" property of a list item. Oftentimes we need these fields in our queries, whether they are CAML queries or LINQ queries. With LINQ to SharePoint you have the ability to extend an entity that you’ve generated and allow additional code to be executed for those entities. In this article I’ll talk briefly about how you can create and manage an extension to your LINQ context to allow additional fields in your queries.

One of the powerful advantages of doing this is that you can extend your queries post-deployment. Let’s for example say that the list in your deployment gets some additional fields in the future; then you’ll need to make sure that your LINQ context can cope with querying those fields as well – this can easily be done by extending an entity context, as I’ll talk about below.

In this article I’ll give you a sample of how to extend the LINQ entity to fit our custom needs, including mapping the "Created By" and "Modified By" fields and some additional logical customizations.

Generate your context

First and foremost we’ll need to generate our context. This can be done using SPMetal (which you can read more about in this article: Getting Started with LINQ to SharePoint), or for example the extensions in CKS:Dev. I’m going to generate a quick and easy sample by utilizing the CKS:Dev tools and then build my samples on top of that.
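
If you’d rather run SPMetal.exe directly instead of going through CKS:Dev, the call looks roughly like this – a sketch where the hive path, site URL and file name are examples you’d adjust for your environment (SPMetal derives the DataContext class name from the code file name):

& "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal.exe" `
    /web:http://yoursite `
    /code:Socialflow.cs `
    /namespace:TOZIT.Samples.ExtendingLinq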

If you haven’t installed the CKS:Dev tools yet, go right ahead and do that – then get back to this step!

1) Generate a LINQ context by using the CKS:Dev plugin, which will give you this additional context menu in the SharePoint server browser:

image

This should auto-generate a class file for you, containing all the entities for the selected site:

image

The file contains all the LINQ to SharePoint data context information that a normal SPMetal.exe command would generate, since this tool in essence is using SPMetal.exe:

image

As you can see, we’ve easily created the LINQ data context that we need, but unfortunately it’s missing some vital parts for our application – namely some of the built-in fields that you would normally want to work with. In my case I was lacking the "Created By" and "Modified By" fields, and I had to find a way to extend the LINQ context to cope with this so my queries can easily be constructed for those fields as well.

Extending an entity in your generated LINQ context

In order for us to be able to extend the LINQ context, Microsoft provides us with an interface called ICustomMapping.

We’ll start by inheriting a new class from this interface, and name the class along the lines of the entity we want to extend.

// ----------------------------------------------------------------------- 
// <copyright file="AnnouncementExtension.cs" company="TOZIT AB"> 
// Awesomeness. 
// </copyright> 
// -----------------------------------------------------------------------

namespace TOZIT.Samples.ExtendingLinq 
{ 
    using Microsoft.SharePoint.Linq;

    /// <summary> 
    /// A LINQ extension to the Announcement-entity 
    /// </summary> 
    public partial class Announcement : ICustomMapping 
    { 
        public void MapFrom(object listItem) 
        { 
            throw new System.NotImplementedException(); 
        }

        public void MapTo(object listItem) 
        { 
            throw new System.NotImplementedException(); 
        }

        public void Resolve(RefreshMode mode, object originalListItem, object databaseListItem) 
        { 
            throw new System.NotImplementedException(); 
        } 
    } 
} 

Make sure you follow these rules:

  • Class name should be the same name as the entity you’re extending (partial class)

Let’s go ahead and add the necessary code to extend our entity. I’ll throw in some simple sample code here on how to map the fields we want, and to add additional logic to our new extension.

// -----------------------------------------------------------------------
// <copyright file="AnnouncementExtension.cs" company="TOZIT AB">
// Awesomeness.
// </copyright>
// -----------------------------------------------------------------------

namespace TOZIT.Samples.ExtendingLinq
{
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Linq;

    /// <summary>
    /// A LINQ extension to the Announcement-entity
    /// </summary>
    public partial class Announcement : ICustomMapping
    {
        #region Properties

        /// <summary>
        /// Gets or sets the Created By property
        /// </summary>
        public string CreatedBy { get; set; }

        /// <summary>
        /// Gets or sets the Created By Login Name property
        /// Returns a reader-friendly version of the user's loginname
        /// </summary>
        public string CreatedByLoginName { get; internal set; }

        /// <summary>
        /// Gets or sets the Modified By property
        /// </summary>
        public string ModifiedBy { get; set; }

        /// <summary>
        /// Gets or sets the Modified By Login Name property
        /// Returns a reader-friendly version of the user's login name
        /// </summary>
        public string ModifiedByLoginName { get; internal set; }

        #endregion

        #region Methods

        /// <summary>
        /// Assigns a field (column) to a property so that LINQ to SharePoint can read data 
        /// from the field in the content database to the property that represents it.
        /// </summary>
        /// <param name="listItem"></param>
        [CustomMapping(Columns = new[] { "Editor", "Author" })] // Needs to be the InternalName of fields..
        public void MapFrom(object listItem)
        {
            var lItem = listItem as SPListItem;

            if (lItem != null)
            {
                // === MAP THE AUTHOR-FIELD ===

                // Map the Created By field to the Author (Created By) field
                CreatedBy = lItem["Author"].ToString();

                // Map the CreatedByLoginName field to the Author's actual LoginName
                SPField authorField = lItem.Fields.GetFieldByInternalName("Author");
                var authorFieldValue = authorField.GetFieldValue(lItem["Author"].ToString()) as SPFieldUserValue;
                if (authorFieldValue != null)
                {
                    CreatedByLoginName = authorFieldValue.User.LoginName;
                }

                // === MAP THE EDITOR-FIELD ===
                // Map the Modified By field to the Editor (Modified By) field
                ModifiedBy = lItem["Editor"].ToString();

                // Map the ModifiedByLoginName field to the Editor's actual LoginName
                SPField editorField = lItem.Fields.GetFieldByInternalName("Editor");
                var editorFieldValue = editorField.GetFieldValue(lItem["Editor"].ToString()) as SPFieldUserValue;
                if (editorFieldValue != null)
                {
                    ModifiedByLoginName = editorFieldValue.User.LoginName;
                }
            }
        }

        /// <summary>
        /// Assigns a property to a field (column) so that LINQ to SharePoint can save 
        /// the value of the property to the field in the content database.
        /// </summary>
        /// <param name="listItem">List Item</param>
        public void MapTo(object listItem)
        {
            var lItem = listItem as SPListItem;
            if (lItem != null)
            {
                lItem["Author"] = CreatedBy;
                lItem["Editor"] = ModifiedBy;
            }
        }

        /// <summary>
        /// Resolves discrepancies in the values of one or more fields in a list item with respect to its current client value, 
        /// its current value in the database, 
        /// and its value when originally retrieved from the database
        /// 
        /// Read more: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.linq.icustommapping.resolve.aspx
        /// </summary>
        /// <param name="mode">Refresh Mode</param>
        /// <param name="originalListItem">Original list item</param>
        /// <param name="databaseObject">Database object</param>
        public void Resolve(RefreshMode mode, object originalListItem, object databaseObject)
        {
            var origListItem = (SPListItem)originalListItem;
            var dbListItem = (SPListItem)databaseObject;

            var originalCreatedByValue =    (string)origListItem["Author"];
            var dbCreatedByValue =          (string)dbListItem["Author"];

            var originalModifiedByValue =   (string)origListItem["Editor"];
            var dbModifiedByValue =         (string)dbListItem["Editor"];

            if (mode == RefreshMode.KeepCurrentValues)
            {
                // Save the Current values
                dbListItem["Author"] = CreatedBy;
                dbListItem["Editor"] = ModifiedBy;
            }
            else if (mode == RefreshMode.KeepChanges)
            {
                // Keep the changes being made
                if (CreatedBy != originalCreatedByValue)
                    dbListItem["Author"] = CreatedBy;
                else if (CreatedBy == originalCreatedByValue && CreatedBy != dbCreatedByValue)
                    CreatedBy = dbCreatedByValue;

                if (ModifiedBy != originalModifiedByValue)
                    dbListItem["Editor"] = ModifiedBy;
                else if (ModifiedBy == originalModifiedByValue && ModifiedBy != dbModifiedByValue)
                    ModifiedBy = dbModifiedByValue;
            }
            else if (mode == RefreshMode.OverwriteCurrentValues)
            {
                // Save the Database values
                CreatedBy = dbCreatedByValue;
                ModifiedBy = dbModifiedByValue;
            }
            
        }

        #endregion
    }
}

As you can see in the sample above, I didn’t just map the two fields I need; I also extended the entity slightly with additional properties for retrieving a clean login name from the user objects. This could of course be done with a normal .NET 3.5 extension method on the SPUser object instead, but I implemented it in the entity here to show how easily it can be extended to fit our needs.
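
If you’d rather go the extension-method route, here is a minimal sketch of what that could look like. Note that the class name, method name and the claims-prefix handling are just my own illustration and not part of the sample project above:

namespace TOZIT.Samples.ExtendingLinq
{
    using Microsoft.SharePoint;

    /// <summary>
    /// Hypothetical extension methods for SPUser (illustration only).
    /// </summary>
    public static class SPUserExtensions
    {
        /// <summary>
        /// Returns a reader-friendly login name, stripping a possible claims prefix such as "i:0#.w|".
        /// </summary>
        public static string GetFriendlyLoginName(this SPUser user)
        {
            if (user == null)
            {
                return string.Empty;
            }

            var loginName = user.LoginName;
            var separatorIndex = loginName.LastIndexOf('|');
            return separatorIndex >= 0 ? loginName.Substring(separatorIndex + 1) : loginName;
        }
    }
}

With something like that in place, the mapping code could simply call authorFieldValue.User.GetFriendlyLoginName() instead of keeping that logic inside the entity.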

Writing a query with our extended data context

So if we’ve followed the very simple steps of creating a new extension for one of our entities, we can now easily access these details from the query we’re writing:

image

namespace TOZIT.Samples.ExtendingLinq.LinqWebPart
{
    using System.ComponentModel;
    using System.Linq;
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;
    using Microsoft.SharePoint;

    [ToolboxItemAttribute(false)]
    public class LinqWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            using (var ctx = new SocialflowDataContext(SPContext.Current.Web.Url))
            {
                // Fetches the items where the current user is the creator
                var myItems = from item in ctx.Announcements
                              where item.CreatedByLoginName == SPContext.Current.Web.CurrentUser.LoginName
                              select item;

                foreach (var item in myItems)
                {
                    Controls.Add(new Literal { Text = item.Title + " : " + item.CreatedByLoginName + " (you) created this item<br/>" });
                }
            }
        }
    }
}

The list contains a few items (all created by the current user, which is the system account – please use your imagination… :-) )

image

And the results:

image

Summary

All in all, this gives us the flexibility to customize the way we write our queries in SharePoint using LINQ. I’ve gotten questions about extending LINQ to SharePoint quite a few times over the past years, so I thought it was time to collect those thoughts in a post. I hope you enjoy it and can start extending your own LINQ queries!

Enjoy!

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

A while back an announcement was made that TFSPreview.com had been made available for general testing. Various bloggers at Microsoft put an invitation token in their MSDN blogs so everyone can have a go at it.

In this article series we’ll take a very quick look at what the hosted TFS solution by Microsoft looks like.

Articles currently in the series:

Steps to hook up a project to your new build server

This article assumes that you’ve already followed along with the previous articles and hooked up a build configuration for your TFSpreview account. Now we’ll take a look at how to hook our projects up in a CI/automated build scenario with Team Foundation Services.

The steps from this point onwards are basically the same as they would be for an on-premises TFS server in your own domain. Your build server is configured, your code is hosted in TFS, and all you need to do is connect your project to TFS and then create a new build definition.

Create a new project (or connect an existing one) and connect to TFS

We’ll start from the beginning and create a new Visual Studio 2010 project (in my case it’ll be an Empty SharePoint Project), and remember to tick the "Add to source control" checkbox:
image

Make sure that the project is connected to your TFS server, check in the source and we can get started:
image

Create a new build definition

At this point (if you’ve followed the articles in this series) you should have a TFS server, a connection from Team Explorer to your TFS server and a new project hooked up in your repository. Now it’s time to create our first build definition so we can automate the builds and deployments.

Start by navigating to Team Explorer, right-click on Builds, and then click "New Build Definition…":
image

This will give you the following new dialog where you can specify details for your build:
image

Move on to the "Trigger" tab. In my case I want to enable CI (Continous Integration) for my project:
image

Move on to the "Workspace" tab. In my case I’ll leave the Source Control Folder as the default root as seen below. You can choose to specify a specific TFS project if you don’t want to include all.
image

Move on to the "Build Defaults" tab. You’ll need to specify a build controller (you should have one here since we created one in the previous article). You will also need to specify a drop folder, where your binaries are going to be delivered upon the build: 
image

Move on to the "Process" tab. This is where things get really interesting. You can from this dialog specify a variety of different variables for your project when it builds. I’m not going to dig into details here because my good mate Chris O’Brien have covered all of that in his article series about automated builds.
image

Save the build definition and validate that it appears in the Team Explorer:
image
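
If you’d rather create the build definition from code than from the dialogs, roughly the same thing can be done through the TFS 2010 build client API (Microsoft.TeamFoundation.Build.Client). The sketch below is untested, and the collection URL, team project name, controller name and drop path are made-up placeholders:

namespace TOZIT.Samples.BuildAutomation
{
    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    public class BuildDefinitionCreator
    {
        public static void CreateCiBuildDefinition()
        {
            // Placeholder URL - use the collection URL from your tfspreview.com dashboard.
            var collection = new TfsTeamProjectCollection(new Uri("https://youraccount.tfspreview.com/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();

            var definition = buildServer.CreateBuildDefinition("YourTeamProject");
            definition.Name = "CI Build";

            // Trigger: build on every check-in (Continuous Integration).
            definition.ContinuousIntegrationType = ContinuousIntegrationType.Individual;

            // Workspace: map the team project root by default.
            definition.Workspace.AddMapping("$/YourTeamProject", "$(SourceDir)", WorkspaceMappingType.Map);

            // Build defaults: the controller we configured earlier and a drop folder for the output.
            definition.BuildController = buildServer.GetBuildController("YourBuildController");
            definition.DefaultDropLocation = @"\\yourbuildserver\drops";

            // Process: pick the first (default) process template registered for the team project.
            definition.Process = buildServer.QueryProcessTemplates("YourTeamProject")[0];

            definition.Save();
        }
    }
}

That corresponds roughly to the Trigger, Workspace, Build Defaults and Process tabs we just walked through.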

Test the build configuration

To validate that our setup works with TFSpreview.com and our own build server, and that our newly created build definition behaves as expected, simply make some changes to your project and check them in; a new build will be scheduled automatically (we chose Continuous Integration, which builds on every check-in). You can see that the build is now scheduled and currently running:
image

And after a while you can validate that it is Completed:
image

The final validation is of course to open the drop folder that we specified and make sure that it now contains our newly built output:
image

Voilà. The build seems to be working.
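
If you want to check the outcome from code as well, the same build client API can list the builds and their drop locations. Again, this is only a hedged sketch with placeholder names:

namespace TOZIT.Samples.BuildAutomation
{
    using System;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    public class BuildStatusChecker
    {
        public static void PrintBuilds()
        {
            // Placeholder URL and team project name.
            var collection = new TfsTeamProjectCollection(new Uri("https://youraccount.tfspreview.com/DefaultCollection"));
            var buildServer = collection.GetService<IBuildServer>();

            // List every build for the team project with its status and drop folder.
            foreach (IBuildDetail build in buildServer.QueryBuilds("YourTeamProject"))
            {
                Console.WriteLine("{0}: {1} ({2})", build.BuildNumber, build.Status, build.DropLocation);
            }
        }
    }
}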

Summary

This post was intended to give you an overview of how simple it is to create a build definition and get started with automated builds in TFSpreview (hosted Microsoft TFS). Pretty neat, and it seems to work just the way we want.

Obviously there are some apparent questions, like:

  • What if I want to output my .wsp files as well?
  • What if I want to execute a specific script upon the execution of the build so I can automate test-deployments?
  • Etc. etc.

My first recommendation is to visit Chris O’Brien’s blog and read all the posts in his CI/automation series, which is simply amazing.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

A while back an announcement was made that TFSPreview.com had been made available for general testing. Various bloggers at Microsoft put an invitation token in their MSDN blogs so everyone can have a go at it.

In this article series we’ll take a very quick look at what the hosted TFS solution by Microsoft looks like.

Articles currently in the series:


Getting your first scheduled build up and running

In order to get a scheduled build that talks to your TFSPreview repository, you’ll need to follow these steps and make sure the prerequisites are fulfilled.

Prerequisites

Note: If you don’t have SharePoint 2010 installed, the installer has an option for installing SharePoint Foundation 2010 for you. In my case, however, I’ve got SharePoint Server 2010 Enterprise running already.

Installing the package

First of all, launch the ISO file that was extracted from the downloaded package and you should see this screen:
image

We want to install Team Foundation Server before we proceed, so choose the first option under the Install headline, which will bring you to this dialog:
image

You’ll need to accept the EULA, and if you’re awesome you’ll keep the second checkbox checked so that Microsoft can review any issues encountered during the process and have a look at them pre-RTM. Click Continue, and then click Install Now in the dialog that follows:
ScreenShot1300

You may or may not need to reboot the computer during the installation, depending on whether some of the prerequisites were already installed beforehand.

Now just sit tight for a while as the installer takes care of the installation for you. Grab a newspaper, get a coffee, check some important stuff on Twitter or simply multitask with other things while you wait.

When it’s done, you’ll have a few options of what type of installation you want to do:
image

Please note: At this point you have several options for how to proceed with the configuration. You can choose one of the following:

  • Configure Team Foundation Application Server
  • Configure Team Foundation Server Proxy
  • Configure Team Foundation Build Service
  • Configure Extensions for SharePoint Products

In my case I’ll be choosing "Configure Team Foundation Build Service", since I only need the Build Service and Build Agents – TFSpreview.com is hosting the actual TFS server.

The next step will present you with a dialog like the following, where you’ll have to choose which default team project collection to use for the build server. Since we don’t have TFS installed locally, the box is currently empty, but fear not: your TFS server is hosted in the cloud (tfspreview.com, remember?), so we’ll just have to go and add that connection as well.

Click the "Browse…" button to open the dialog for choosing your TFS connection
image

If you haven’t already connected to a TFS server, this dropdown will be empty. Click "Servers…":
image

Click the "Add…" button:
image

Finally enter the URL to your TFS collection and click "OK":
image

You will see a dialog that enables you to log in to the services (use the login details you signed up with on tfspreview.com):
image

When the sign-in is completed you’ll see that you now have a list of TFS collections. Choose your DefaultCollection (or another one) and click "Connect":
image

It should hopefully say something like this, telling you that there are no build servers unless you’ve already configured one previously:
image

In the next dialog I’ll choose "Use the default setting" for my setup:
image

In the next dialog you’ll have to choose credentials for your build rig. I’m using a dedicated domain account called SHAREPOINT\SPBuild:
image

Make sure you validate the configuration and then press "Next":
image

If awesomeness is found on your machine, it should look something like this:
image

Click the "Configure" button and let the installer have its way for a while. Hopefully all these fancy green icons will show you that things went smoothly:
image

With that done, in the next dialog you’ll see a nice "Success" message and you’re ready to start creating and working with your build agents:
image

Validate the Build Server

On your Start Menu, you should find the following new shortcut:
image

Clicking the "Team Foundation Server Administration Console" should bring you forth the following dialog where you can validate that your machine is properly up and running with a build server and agents. Click the "Build Configuration" option in the menu to the left and make sure your build agents are running under the controller:
image

Summary

If you’ve followed along with the steps in this post you’ll see how easy it is to get up and running with a build server (controller + agents) for your TFS. In this case, I chose to connect it to the TFSpreview-hosted TFS account.

In the next post in this series I’ll talk about how you can create a new build definition from Visual Studio 2010 on your dev machine and have it automatically build on this build server. Gotta love automation!

Enjoy.

Author: Tobias Zimmergren
http://www.zimmergren.net | http://www.tozit.com | @zimmergren

Introduction

A while back an announcement was made that TFSPreview.com had been made available for general testing. Various bloggers at Microsoft put an invitation token in their MSDN blogs so everyone can have a go at it.

In this article series we’ll take a very quick look at what the hosted TFS solution by Microsoft looks like.

Articles currently in the series:


Connect Visual Studio 2010 to your new hosted team project

In order to be able to connect to the hosted TFSPreview team project, you’ll need to comply with the prerequisites I’m listing here.

Prerequisites

Hook up Visual Studio to your new repository/project

Alright, if you’ve downloaded and installed KB2581206 (which means you’re spinning VS2010 SP1 already) you are ready to connect. The procedure for connecting to the hosted TFS service is basically the same as if you were to connect to any other TFS repository, which is easy and awesome.

In Visual Studio 2010 SP1, simply make these smooth ninja moves and you’re done:
image

Make sure to fetch the URL of your account (as seen in your dashboard, depicted below):
image

Enter this URL in the Visual Studio 2010 dialogs and we’re ready to kick off:
image

It’ll ask you for your credentials, which you use to verify your account details:
image

You should now be authenticated and your repository should be available:
image

Go ahead as you normally do and choose the projects that interest you, and then you’re basically done:
image

Your Team Explorer should contain your TFS project and you should be able to work with it as you normally would from Visual Studio 2010:
image

This means you’ve got all of your standard tasks and operations available straight from VS 2010 (so you don’t have to go to the website to make changes…):
image
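
If you ever want to talk to the hosted collection from code rather than through Team Explorer, the connection works much the same way as against an on-premises TFS. Here is a small, hedged sketch (the URL is a placeholder; use the one from your dashboard):

namespace TOZIT.Samples.HostedTfs
{
    using System;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;

    public class HostedTfsConnector
    {
        public static void ListTeamProjects()
        {
            // Placeholder URL - replace with your own tfspreview.com collection URL.
            var collection = new TfsTeamProjectCollection(new Uri("https://youraccount.tfspreview.com/DefaultCollection"));

            // Make sure we're authenticated before using any of the services.
            collection.EnsureAuthenticated();

            // List the team projects available in the collection.
            var versionControl = collection.GetService<VersionControlServer>();
            foreach (TeamProject project in versionControl.GetAllTeamProjects(false))
            {
                Console.WriteLine(project.Name);
            }
        }
    }
}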

Summary

Easy enough. As soon as you’ve downloaded the required tooling to get connected, you can hook up your new cloud-hosted team project in Visual Studio 2010 without any problems. Give it a spin, it flows quite nicely!

Enjoy.

Author: Tobias Zimmergren | www.tozit.com | @zimmergren

Introduction

Sometimes when you’re in a development project you can feel the pain of debugging. If there’s a lot of code floating around in a very complex solution, it may be hard to sort out the method calls and how they depend on each other. To ease the task of debugging, there’s a great VS 2010 plugin called Debugger Canvas, which will help you sort out a lot of the hassle while debugging.

In this article we’ll just take a quick look at what Debugger Canvas is and how it can assist us in our daily debugging adventures.

Getting Started with Debugger Canvas

Firstly, you obviously need to download the extension for Visual Studio 2010, which can be done HERE.

Please note: The Debugger Canvas extensions are only available for VS 2010 Ultimate.

Debugger Canvas in Action

When you’ve installed the extension, there are a few new options presented when debugging. Your new "F5" experience will be based on the new Debugger Canvas UI instead of the traditional debugging experience, which means you’ll be able to follow the calls within your code more easily, like this:

image

When you step deeper into the code, you’ll quite easily see how the calls were made:

image

Summary

You should definitely take a look at Debugger Canvas if you haven’t already, as it’ll be most helpful in your development adventures.

Get a better overview here and watch the introductory video!

Enjoy.