Monday, May 9, 2022

Latest Azure PaaS Sitecore Logs using a single line of PowerShell

If you’re anything like me, you probably don’t have a passion for manually digging through the series of hundreds of randomly dated folders that look like this in search of the latest Sitecore logs:


Hello darkness, my old friend

Although several tools and approaches are available (including this nifty tool credited to fellow Sitecore MVP Kiran Patil, as well as some of my own previous posts from 2018 and 2019 covering this topic), I've recently adopted a different strategy that has proved successful across several Sitecore PaaS clients for quickly obtaining the latest physical Sitecore log for a given server.

The post-worthy kicker?  It's one line of PowerShell:

$kuduHost = "https://yourazuresitename-xp2-cd.scm.azurewebsites.net"; Write-Output "`n[ LATEST SITECORE LOGS ]`n"; $array = @(); Get-ChildItem "C:\home\site\wwwroot\app_data\logs" -File -Recurse | Where-Object { $_.FullName -match "azure.*.txt" -and $_.LastWriteTime -gt (Get-Date).AddHours(-12) } | Sort-Object LastWriteTime | ForEach-Object { $path = $_.FullName.replace("C:\home\site\wwwroot\app_data\logs\", "$kuduHost/api/vfs/site/wwwroot/App_Data/logs/"); $array += "`n[$($_.LastWriteTime)]`n$path`n" }; $array | Select-Object -Last 3

😬

Okay, it's...kind of a long one-liner...but one line nevertheless.

The above example outputs direct links to the latest three physical Sitecore log files matching the pattern 'azure.*.txt'.

In practice, the desired file can be highlighted in the console, at which point you can copy the URL or open it in a new tab.



Let's break it down

The first line defines a variable for the Kudu host you're using:

$kuduHost = "https://yourazuresitename-xp2-cd.scm.azurewebsites.net"

The second line outputs a (wholly arbitrary and unnecessary) title:

Write-Output "`n[ LATEST SITECORE LOGS ]`n"

The third line initializes an empty array aptly named `$array` (because I'm clever):

$array = @();

This is where it gets exciting. The Get-ChildItem cmdlet gets all files recursively under the site's `\App_Data\logs` location:

Get-ChildItem "C:\home\site\wwwroot\app_data\logs" -File -Recurse
Neat!

We can pipe in a Where-Object cmdlet to filter for file names matching the regex pattern 'azure.*.txt' (or a looser pattern like '\.txt$' if you want all log types - Publishing, Crawling, Dianoga, SPE, etc.) and to apply a 12-hour threshold against the `LastWriteTime` property:

Get-ChildItem "C:\home\site\wwwroot\app_data\logs" -File -Recurse |
Where-Object {$_.FullName -match "azure.*.txt" -and $_.LastWriteTime -gt (Get-Date).AddHours(-12)}

We can then sort the files by `LastWriteTime` and pipe in a ForEach-Object cmdlet to iterate through each one:

Get-ChildItem "C:\home\site\wwwroot\app_data\logs" -File -Recurse | Where-Object { $_.FullName -match "azure.*.txt" -and $_.LastWriteTime -gt (Get-Date).AddHours(-12) } | Sort-Object LastWriteTime | ForEach-Object { $path = $_.FullName.replace("C:\home\site\wwwroot\app_data\logs\", "$kuduHost/api/vfs/site/wwwroot/App_Data/logs/"); $array += "[$($_.LastWriteTime)]`n$path`n" }

Notice that in the ForEach-Object cmdlet, we create a variable called `$path` set to a string that takes the file's FullName and replaces the 'system path' portion with our `$kuduHost` variable concatenated with `/api/vfs/site/wwwroot/App_Data/logs/`.

$path = $_.FullName.replace("C:\home\site\wwwroot\app_data\logs\", "$kuduHost/api/vfs/site/wwwroot/App_Data/logs/")

Without this string replacement, we'd only get the system path for each file in the dataset, which would still require navigating to the file manually.
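To make the replacement concrete, here's the same operation run against a hypothetical file (the log file name below is made up for illustration):

```powershell
$kuduHost = "https://yourazuresitename-xp2-cd.scm.azurewebsites.net"

# A hypothetical file under the site's logs folder
$fullName = "C:\home\site\wwwroot\app_data\logs\log.azure.20220509.txt"

# Swap the local filesystem prefix for the Kudu VFS API prefix
$path = $fullName.Replace("C:\home\site\wwwroot\app_data\logs\", "$kuduHost/api/vfs/site/wwwroot/App_Data/logs/")
$path
# -> https://yourazuresitename-xp2-cd.scm.azurewebsites.net/api/vfs/site/wwwroot/App_Data/logs/log.azure.20220509.txt
```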

Also, within the ForEach-Object cmdlet, a formatted string containing the LastWriteTime and the `$path` variable is added to the `$array` variable:

$array += "[$($_.LastWriteTime)]`n$path`n"

The `n escape sequence used above inserts line breaks.

After the files have been processed, the `$array` variable is output. One caveat: by this point the array holds formatted strings rather than file objects, so piping it to `Sort-Object $_.LastWriteTime` has no property to act on; the reliable place to sort is earlier in the pipeline, on the file objects themselves (`Sort-Object LastWriteTime`), before the strings are built.

A Select-Object cmdlet is piped in to limit the number of results to 3:

$array | Select-Object -Last 3


By combining these, removing line breaks, and adding semicolons to separate the statements, we've got our one-liner! 🕺

Bonus: IIS HTTP Request Logs

Using the same approach with a few modifications (the Kudu host, the title, the source path, and the filename filter), the application's raw IIS HTTP Request Logs can also be obtained:

$kuduHost = "https://yourazuresitename-xp2-cm.scm.azurewebsites.net"; Write-Output "`n[ LATEST IIS LOGS ]`n"; $array = @(); Get-ChildItem "C:\home\LogFiles\http\RawLogs" -Recurse | Where-Object { $_.FullName -match ".log" -and $_.LastWriteTime -gt (Get-Date).AddHours(-12) } | Sort-Object LastWriteTime | ForEach-Object { $path = $_.FullName.replace("C:\home\LogFiles\http\RawLogs\", "$kuduHost/api/vfs/LogFiles/http/RawLogs/"); $array += "[$($_.LastWriteTime)]`n$path`n" }; $array | Select-Object -Last 3


Final Thoughts

You can generate variations of this one-liner by changing the variables, share them with the rest of your development/troubleshooting team, and keep them ready to copy from an internal wiki.

Feel free to use and modify the script as you see fit. 🚀

Friday, April 1, 2022

'Tracker.Current.Session.Interaction should not be null' CountryCondition

Uh oh. 

The following error appears in overwhelming numbers on my client's content delivery servers:

ERROR Evaluation of condition failed. Rule item ID:
Unknown, condition item ID: {9A4BEB4B-4B0F-4392-A798-124CEC8AADA4}

Exception: Sitecore.Framework.Conditions.PostconditionException

Message: Postcondition 'Tracker.Current.Session.Interaction should not be null' failed.

Source: Sitecore.Framework.Conditions
at Sitecore.Framework.Conditions.EnsuresValidator`1.ThrowExceptionCore(String condition, String additionalMessage, ConstraintViolationType type)
at Sitecore.Framework.Conditions.Throw.ValueShouldNotBeNull[T](ConditionValidator`1 validator, String conditionDescription)
at Sitecore.Framework.Conditions.ValidatorExtensions.IsNotNull[T](ConditionValidator`1 validator)
at Sitecore.ContentTesting.Rules.Conditions.CountryCondition`1.Execute(T ruleContext)
at Sitecore.Rules.Conditions.WhenCondition`1.Evaluate(T ruleContext, RuleStack stack)
at Sitecore.Rules.Conditions.OrCondition`1.Evaluate(T ruleContext, RuleStack stack)
at Sitecore.Rules.Conditions.WhenRule`1.Execute(T ruleContext)
at Sitecore.Rules.Conditions.WhenCondition`1.Evaluate(T ruleContext, RuleStack stack)
at Sitecore.Rules.RuleList`1.Run(T ruleContext, Boolean stopOnFirstMatching, Int32& executedRulesCount)

We're talking thousands upon thousands of these errors. 

The error references a GUID we can use to determine the context:

{9A4BEB4B-4B0F-4392-A798-124CEC8AADA4}

Moreover, the error's message hints at what ultimately causes the error to trigger:

Tracker.Current.Session.Interaction should not be null

For context, the GUID refers to the following out-of-the-box Sitecore item:

/sitecore/system/Settings/Rules/Definitions/Elements/Predefined Rules/Predefined Rule

Nothing to see here...


After digging in, I found that the client had configured personalization on a Controller Rendering for their site's homepage component so that unique content is presented depending on the user's country. They'd implemented the following Predefined rules on the rendering:

- visitor is located in the Asia Pacific region
- visitor is located in the UK
    - or where visitor is equal to the United Kingdom
- where visitor is located in Continental Europe

These are custom rules they've added that categorize a list of countries into a region they want to target.

For example, the `visitor is located in the Asia Pacific region` looks like this:
where the country is equal to Australia
or where the country is equal to Brunei Darussalam
or where the country is equal to China
or where the country is equal to Hong Kong
or where the country is equal to Singapore
or where the country is equal to Taiwan
or where the country is equal to Thailand
or where the country is equal to New Zealand
or where the country is equal to Qatar
or where the country is equal to Turkey
or where the country is equal to Malaysia
or where the country is equal to Russian Federation
or where the country is equal to India
or where the country is equal to Indonesia
or where the country is equal to United Arab Emirates
or where the country is equal to Philippines
or where the country is equal to Israel
or where the country is equal to Pakistan
or where the country is equal to Saudi Arabia
or where the country is equal to Vietnam
or where the country is equal to Japan

Each country has been defined using the out-of-the-box `GeoIP` > `where the country compares to specific countries` condition template.

This line in the error denotes that the `CountryCondition` is the culprit behind the recurring error:

at Sitecore.ContentTesting.Rules.Conditions.CountryCondition`1.Execute(T ruleContext)
Reflecting on the `Sitecore.ContentTesting.Rules.Conditions` namespace in Sitecore.ContentTesting.dll for v10.0.0 reveals a series of checks in the first few lines of the `CountryCondition<T> : VisitCondition<T> where T : RuleContext` class's Execute() method.

My hunch was that the last check was causing the failure; instead of failing silently, it throws an error, most often in cases where bot-like traffic is involved:

Condition.Ensures<CurrentInteraction>(Tracker.Current.Session.Interaction, "Tracker.Current.Session.Interaction").IsNotNull<CurrentInteraction>();

To quell the flood of errors initially, I applied a new stand-alone rule, `visitor is human` (also out-of-the-box), preceding each existing `visitor is located in {REGION_NAME} region` rule to attempt to deflect bots and spam, where a country code or identifiable contact/interaction is unlikely to yield a country (which would otherwise result in the error).

After applying these changes and publishing them, those errors disappeared entirely from the error logs. However, the introduction of the `visitor is human` rule preceding the rest had an unintended effect: first-time visitors always failed this first condition, causing them to see the personalized rendering only after reloading the page. Removing the rule enabled users to receive the personalized rendering on the first load but reintroduced the flood of errors.

It seemed that instances where `Tracker.Current.Session.Interaction` is null may be caused by bot traffic according to: https://support.sitecore.com/kb?id=kb_article_view&sysparm_article=KB0960279

Given that this is version 10.0.0, this particular bug should have been resolved in all versions above 8.1.

Upon decompiling and comparing the GetTracker : CreateTrackerProcessor > Process() override in Sitecore.Support.424667.81.dll for v8.1+ to the 10.0.0 Sitecore.Analytics.dll's GetTracker : CreateTrackerProcessor > Process method, there's one notable difference:

v8.1+


v10.0.0

It was unclear whether we needed a similar patch implementing `args.set_Tracker(new DefaultTracker());` in the Process() method.

I raised the question with Sitecore Support to explore one of two resolutions:

1) A patch for suppressing errors in Sitecore.ContentTesting.Rules.Conditions.CountryCondition's Execute() method, when the `Condition.Ensures<CurrentInteraction>(Tracker.Current.Session.Interaction, "Tracker.Current.Session.Interaction").IsNotNull<CurrentInteraction>();` is not satisfied (perhaps by returning false).

or

2) An override patch for the tracker creation similar to the https://support.sitecore.com/kb?id=kb_article_view&sysparm_article=KB0960279 hotfix.

Sitecore Support was able to reproduce the reported behavior in a clean local instance of Sitecore 10.0.0 (Initial Release) without involving any of our specific customizations. They registered the issue as a bug in their bug tracking system (future reference number 456739).

Sitecore's official statement regarding the bug: 
The reported behavior occurs because Sitecore.Rules.Conditions.WhenRule class inherits CanEvaluate() method from Sitecore.Rules.Conditions.RuleCondition class which always returns true. That is why the rule is not skipped so the evaluation continues and leads to the exception. 
In order to workaround the issue, please try to avoid the usage of Conditional renderings rules.
Please assign conditions to your renderings directly without wrapping them into conditional renderings rule.
 
In this case, Sitecore will trigger CanEvaluate method related to the specific condition, so the condition will be skipped.

I ended up pursuing the following approach myself:

  1. Decompiled the `Sitecore.ContentTesting.dll` and copied the Sitecore.ContentTesting.Rules.Conditions.CountryCondition class into our solution as a new class called `CountryConditionOverride`. I used Telerik JustDecompile for this.

  2. Commented out the offending line (line 48) and replaced it with a simple null check that returns false.

  3. Enabled the custom override class by updating the `Type` field on the following default Sitecore item and published it:
/sitecore/system/Settings/Rules/Definitions/Elements/GeoIP/Country
{52E42C59-7210-43E5-94A6-3EA6B98835B8}
Field: Script (section) > Type 
Old value:  `Sitecore.ContentTesting.Rules.Conditions.CountryCondition,Sitecore.ContentTesting`
New value: `Foundation.Overrides.Rules.Conditions.CountryConditionOverride, Foundation.Overrides`

The final override class:


This resolves the issue with erroneous error log entries while retaining the expected Country condition functionality when the `Tracker.Current.Session.Interaction` happens to not be null.

I've shared my resolution with Sitecore Support and will keep an eye on an official hotfix (reference 456739), which I will share once it's available. 

Hopefully, in the meantime, this helps someone out there in a similar scenario!

Thursday, September 23, 2021

Sitecore Cache Tuning: LAYOUT_DELTA_CACHE

While tuning caches for a production-level Sitecore 10.0.0 site, I came across a cache name I was unfamiliar with while using a cache tuner: LAYOUT_DELTA_CACHE


There were also log entries specific to this cache:

4484 12:07:57 INFO Cache created: 'LAYOUT_DELTA_CACHE' (max size: 50MB, running total: 6580MB)

Oddly enough, at the time of this post, no Google Search results mentioned this cache name.


Sitecore's documentation had no mention of the setting either.

Reaching out to Sitecore Support helped clarify things, and I wanted to share in case anyone else happens to come across this same cache that needs tuning. 

"The cache seems to be utilized when applying layout deltas when Sitecore is retrieving the layout field. The default size of this cache is 50MB, however, you can modify it with the "Caching.LayoutFieldDeltaCacheSize" setting."

Check out the following example configuration:
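The example configuration was shown as an image in the original post; a typical Sitecore patch file for this setting would look something like the following (the 100MB value is an arbitrary example; the setting name comes from Sitecore Support's reply above):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- LAYOUT_DELTA_CACHE defaults to 50MB -->
      <setting name="Caching.LayoutFieldDeltaCacheSize" value="100MB" />
    </settings>
  </sitecore>
</configuration>
```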


Hopefully, this helps with your Sitecore cache tuning activities. Happy tuning! 😊

Monday, July 26, 2021

Docker Compose Failing; 'unknown flag: --d'

My Windows Docker installation was recently updated to version 3.5.2 (66501), after which commands like docker-compose up --d failed with the following errors:

level=warning msg="network default: network.external.name is deprecated in favor of network.name" not implemented


unknown flag: --d


As it turns out, this is due to an experimental feature that seems to be enabled by default in Docker's Experimental Features settings:


The solution, for now, was to simply disable the experimental feature by unchecking the `Use Docker Compose V2` checkbox in the settings, or by running `docker-compose disable-v2`.
More information about the Compose V2 beta CLI can be found here: 

Compose V2 beta | Docker Documentation

Wednesday, May 26, 2021

Resolving Docker Container Networking Issues while connected to VPN using Cisco AnyConnect

After prepping and polishing a custom legacy Sitecore 8.2 Docker environment for our developers, we ran into a significant blocker that had us questioning whether we needed to backtrack and use locally installed Sitecore instances instead. 

The blocker stemmed from this particular client's VPN: Cisco AnyConnect Mobility Client. 🤮

At a high level, any time we connected to the VPN using Cisco AnyConnect, the running containers would begin to misbehave, and the symptoms disappeared after the VPN was deactivated.

Symptoms included:

  1. Inability to use custom hostnames to access the site (pings from the host to the CM and Solr hostnames failed with an `Unreachable` or `Request Timeout` code).
  2. Inability to use the localhost:portnumber hostname to access the site.
  3. Complete loss of internet access from the running containers.

Because a VPN connection is required for several API-based components, it was essential to solve this.  


Some of the troubleshooting attempts included:

  • Switched container isolation mode from `process` to `hyperv`.
  • Checked and unchecked various options in the Cisco AnyConnect settings (including `Allow local (LAN) access when using VPN`).
  • Checked and unchecked various Docker settings under the General tab in Docker Desktop settings (`Expose daemon on tcp://localhost:2375 without TLS`).
  • Verified local firewall settings.
  • Applied DNS overrides to the Docker Engine daemon.json file that matched the active DNS configuration for the VPN endpoint.
  • Applied various parameters to the docker-compose.yml file (dns, extra_hosts, etc.)
  • Fiddled with various `Advanced TCP/IP Settings` under Control Panel\Network and Internet\Network Connections in Windows.
  • Created custom Hyper-V and Docker bridge/transparent networks to try to restore internet connectivity. 

The number of tabs I had opened in my browser was unfathomable, without much to show for it. There were undoubtedly many similar issues reported across the web related to Cisco AnyConnect and Docker, but no suggestions remedied the problem.

After hours of troubleshooting, I tried to replicate the behavior with other VPN connections not using the Cisco AnyConnect client and found that none of the symptoms were present. I couldn't find any evidence that this is an issue with Docker itself; instead, it appeared to be caused by how Cisco AnyConnect handles connections and IP routing.

I then came across a comment in a thread related to drive sharing with Docker when using AnyConnect: https://github.com/docker/for-win/issues/360#issuecomment-442586618 

I ♥ you, jrbercart

Since we don't have any pull over the client's VPN setup and configuration, I decided to try OpenConnect as a substitute for Cisco AnyConnect, which evidently uses the same protocol to establish a VPN connection.

I connected to the client's VPN endpoint using OpenConnect, and all of the networking issues with the running Docker containers disappeared!  

If you happen to find yourself in a similar situation, go ahead and drop Cisco AnyConnect and give OpenConnect a try to save yourself some troubleshooting hours! ☺

Thursday, April 8, 2021

Sentiment Analysis and Keyword Extraction using Sitecore PowerShell and Microsoft Cognitive Text Analytics

Sitecore Hackathon 2021

Well...wow, it actually happened...

I managed to snag a category win for the 2021 Sitecore Hackathon! 😅


This year, I unexpectedly flew solo as my team members could not attend (both due to completely understandable reasons).  Luckily for me, one of this year's categories, in particular, made me feel like I stood a chance: "Best use of Sitecore PowerShell Extensions to help Content Authors and Marketers."

YES. YES YES 1000x YES. 

Knowing that I needed to land on something fairly quickly to complete all submission requirements (a completed module with clean code, reliable installation instructions, a well-documented README.md, and a video) my evening began with a brainstorming session listing all possible routes I could take for the next 24 hours.  

I actually landed on a similar concept I posted about a couple of years back - interacting with Microsoft's Cognitive Services using PowerShell, at the time focused on content translation. I knew Microsoft had continued to update their API offerings since that post, so I started digging into what was new. I stumbled upon the Sentiment Analysis API, which seemed like an excellent use case that could satisfy the 'help Content Authors and Marketers' category requirement.

By providing the right combination of SPE user interactivity (modal dialogs, accessibility of the utility in the Ribbon, etc.), I could build a utility that analyzes content from a given field and provide a sentence-by-sentence breakdown of the content's sentiment score using AI.

After playing around with the example APIs in the browser, I decided to create my Text Analytics Cognitive Service in Azure, grab my API keys, and fiddle with the API further in Postman. At that point, I felt pretty confident that I could integrate this with SPE. 🤞

The Sentiment Analyzer would:

  • Analyze the sentiment of field content directly in Sitecore.

  • Give Content Authors the ability to run an analysis of a given field's content, which returns an overall sentiment score and a sentence-by-sentence breakdown of each sentence's sentiment score and corresponding confidence scores.

  • The results are displayed using a Show-Result modal and rendered in an easy-to-digest format.

I built the user dialog, wrote code that generated the appropriate POST data for the sentiment API endpoint, built the functions to render the data (using emojis, of course 👩‍🚀), and configured a new Sitecore template and corresponding item for API key storage, then tied it all together into an SPE module that exposed the tool from the right-click Context Menu and from the Ribbon.

As midnight approached, I felt I was in decent enough shape with the Sentiment Analysis script that I could begin exploring another API in the same Text Analytics product group. I moved forward with a second tool utilizing the API's key phrase extraction feature without a tremendous amount of overhead; mostly endpoint changes, JSON parsing, and data rendering differences.

The Keyword Analyzer would:

  • Analyze a field's content to extract critical keywords/phrases.

  • Give Content Authors the ability to analyze a given field's content which returns a list of extracted keywords that can then be used to manually populate a meta keywords field.

  • The results are displayed using a Show-Result modal and rendered in an easy-to-copy format. 


I got started, but a couple hours later...


Then a few hours later...

I spent most of the day (alongside juggling sick-kids priorities) polishing the scripts I had so far; resolving logic issues, error prevention, adding code comments, and overall meticulous code clean-up.

Eventually, I had a functional set of utilities. 

Buttons in the Ribbon configured in the SPE module.


Dialog when clicking either utility against an item
with a Single-Line, Multi-Line, or Rich Text field. 

Sample output of sentiment analysis

Sample output of keyword analysis

I made sure to stop by for a late morning Coffee Break. ☕


I built the final structure of the SPE module using the Module Wizard 🧙‍♂️ to configure my integration points. The module also stores the API Settings item, so swapping in an API key would be seamless for anyone who installs the module.


⚡ The module looked like this in the tree:




I spent the final hours of the event packaging the module/testing the installation steps before working on multiple documentation phases (using Markdown for absolutely everything in 2020 was really coming in handy).

It wasn't long before a mid-afternoon Twitter update:



The video production was probably one of the most challenging parts of this experience. After writing a shorthand verbal script, I tried to record the entire demo in a single take. I used OBS Studio to record and the built-in Video Editor in Windows for post-production. I even squeezed in some personal music snippets I composed some time ago without risking copyright strikes on YouTube. 😂

The video submission can be viewed here:

By around 5 PM, I was done and had submitted my entry 🚀

The full GitHub submission can be found here, including the full source code for both scripts, the module ZIP for installation, and installation steps.

Take it for a spin if you care to! 🤹‍♂️

I'm really humbled and proud to have been a part of the winner's circle this year.  Another big shout-out to the folks who run and judge the event, as well as a big congratulations to the other category winners!

Check out the complete 2021 Sitecore Hackathon winners announcement here: https://www.youtube.com/watch?v=YEOy7lIDZUU

I'm already looking forward to next year. 📆


Friday, February 26, 2021

Sitecore Containers Prerequisite Check for Local Environments with PowerShell

If you're looking to finally dive into the world of Docker, there's no better time than now with the release of 'Sitecore XP 10.1 Initial Release'.  If you haven't worked with Sitecore Containers yet, you'll need to settle several prerequisites before starting. 

As a callback to when Sitecore 9 and SIF were all the rage and new machine prerequisites were aplenty (ref Sitecore 9 Machine Prerequisites Check with PowerShell), I spent some time developing a new, menu-driven PowerShell script to facilitate the validation of prerequisites when setting up a local development environment using Sitecore Containers.

The sitecore-containers-prerequisites.ps1 script sets out to:

Quickly verify Sitecore Container requirements:
  • Hardware requirements (CPU, RAM, Disk Storage, and presence of SSD)
  • Operating system compatibility (OS Build Version, Hyper-V/Containers Feature Check, IIS Running State)
  • Software requirements (Docker Desktop, Docker engine OS type Linux vs. Windows Containers)
  • Network Port Check (443, 8079, 8984, 14330)
Download and Install required software:
  • Chocolatey
  • Docker Desktop
  • mkcert 
Enable required Windows Features:
  • Containers
  • Hyper-V 
Download the latest 10.1.0:
  • Container Package ZIP 
  • Local Development Installation Guide PDF

Demo

Selecting the 'Scan All Prerequisites' option executes all scan options (effectively each individual scan, which are also available on their own):


Here's a demo of the script identifying that Docker is set to use Linux Containers instead of the required Windows Containers:



I hope this helps folks new to Sitecore Containers get started confidently, knowing their machine is ready - and also brings some simplicity for those accustomed to developing with Sitecore Containers who are just setting up a new machine.

You can grab a copy of the script here: https://github.com/strezag/sitecore-containers-prerequisites 

As always, feel free to use and modify the script to fit your needs.
Leave a comment if you have any suggestions or recommendations, too!

Monday, October 19, 2020

Desktop Notifications for New Questions on Sitecore Stack Exchange using PowerShell + BurntToast + Windows Task Scheduler



If you're looking for ways to become more proactive in the Sitecore community, one great way to gain traction and potentially make a real impact is to help answer questions on the Sitecore Stack Exchange. You can give yourself opportunities to contribute by being one of the first users to read and potentially respond to new questions - and you can do that by setting up an alert that notifies you when a new question has been asked.
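The post's full walkthrough isn't reproduced here, but the core idea can be sketched in a few lines. This is a minimal sketch of my own (not the post's exact script): it assumes the BurntToast module is installed (`Install-Module BurntToast`) and that the site's Atom feed is available at the standard Stack Exchange `/feeds` endpoint.

```powershell
# Parse an Atom feed and return questions newer than a given timestamp
function Get-NewQuestions {
    param(
        [xml]$Feed,         # Atom feed XML
        [datetime]$Since    # only return entries published after this time
    )
    $Feed.feed.entry |
        Where-Object { [datetime]$_.published -gt $Since } |
        ForEach-Object {
            [pscustomobject]@{
                Title     = $_.title.'#text'
                Published = [datetime]$_.published
            }
        }
}

# Scheduled via Task Scheduler (e.g. every 5 minutes), toast each new question:
# $feed = [xml](Invoke-WebRequest "https://sitecore.stackexchange.com/feeds" -UseBasicParsing).Content
# Get-NewQuestions -Feed $feed -Since (Get-Date).AddMinutes(-5) | ForEach-Object {
#     New-BurntToastNotification -Text "New SSE question", $_.Title
# }
```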

Monday, August 31, 2020

๐Ÿ†• Sitecore Icon Search Update: JSS Icons

Sitecore Icon Search has been around since 2018 and is still used widely across the Sitecore development community (9,000+ visits in 2020 so far).  Generally, the app has been self-sustainable as the approach hasn't changed from version to version.  

 Last week, a couple of my colleagues sent me a request:

Gabe – do you think you can add the JSS enum as a column on Sitecore Icon Search? 

Tuesday, August 4, 2020

Sitecore 10 Docker Containers: Cannot start service solr

It's here!  Sitecore 10 has been released into the wild today and it comes with a refined developer experience that includes official container support.  This is super exciting and really helps solidify my thoughts around Docker and its role in the Sitecore developer ecosystem. 

Check out this great documentation site also released today: https://containers.doc.sitecore.com/docs/intro

Well, I jumped right in and, while things appeared to be going smoothly (all images downloaded successfully), I stumbled on this error when composing up the containers:




At first glance, this looked like a collision issue with some existing Docker NAT network residue from my other Docker containers.

I tried:
  1. Pruning the Networks using the VS Code Docker Extension:



  2. Stopping all Docker processes and their relevant services, and restarting:

  3. Restarting my machine

None of these attempts helped, unfortunately. 

If we look at how Solr is defined in the docker-compose.yml file, we'll see that the port mapping binds :8984 on your local machine to :8983 on the running Solr container.
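For reference, the relevant fragment of that compose file looks something like this (paraphrased rather than copied from the Sitecore package, so treat the image name as illustrative):

```yaml
solr:
  image: ${SITECORE_DOCKER_REGISTRY}sitecore-xp0-solr:${SITECORE_VERSION}
  ports:
    - "8984:8983"  # host port 8984 -> container port 8983
```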
 

In my case, I have multiple Solr instances running on my machine from previously installed Sitecore instances: 


Whenever I installed new Solr instances, avoiding ports already used by existing Solr instances was a prerequisite (e.g., if one version of Solr is running on 8983, the new version would use 8984; if I needed yet another version, that one would use 8985, and so on). The same applies in this case.

Because the default Sitecore 10 Docker Compose is trying to use port 8984, it must be available.  

I navigated to each Solr installation on the filesystem and confirmed that port 8984 was in fact mapped to my local 5.4.1 Solr instance.


By stopping the running 5.4.1 Solr service on my local machine, I was able to free up port 8984, allowing the Solr instance in the Docker container to occupy it.

 
Happy Sitecore Release Day! 👏

Friday, July 31, 2020

Generate Google Lighthouse Reports with Docker using PowerShell



While browsing Docker Hub, I came across this nifty Google Lighthouse Docker image (by Jay Moulin) which allows you to execute a Lighthouse audit against a given URL in a containerized application - made possible by the Google Chrome Headless Docker base image.  From a practical standpoint, this feels more reliable than running Lighthouse in the Chrome browser where extensions and other variables can easily interfere with the results of the audit. 

You can check out the Dockerfile for this image here: 

Consuming it is pretty straightforward.  With Docker installed and running while switched to Linux containers, two commands are all you need:
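The two commands were shown as screenshots in the original post; based on the image's documentation and the options described later in this post, they amount to something like the following (the URL and report path are placeholders):

```powershell
# Grab (or refresh) the image
docker pull femtopixel/google-lighthouse

# Run an audit; the report lands in C:\lighthouse via the volume mapping
docker run --rm -v "C:/lighthouse:/home/chrome/reports" femtopixel/google-lighthouse https://example.com
```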


Additional options for the Lighthouse audit, like controlling the emulated device form factor (mobile vs. desktop), controlling the throttling method (devtools, provided, simulate), or defining specific categories (Accessibility, Best Practices, Performance, PWA, or SEO) can be included after the URL.
 
However, that's quite a bit of text to remember, and memorizing a bunch of Lighthouse CLI options is not something I see myself doing. 😋

BUT - we can make this tool more approachable by wrapping it in a PowerShell script. 😁

The name of the game is simplicity: execute .\LighthouseReport.ps1 from a PowerShell terminal, pass in a URL/standard Lighthouse options, and let it run. 


👨‍💻 A Little PowerShell

In a new PowerShell file, we'll add a mandatory string parameter called $Url
We'll also include non-mandatory string parameters:
  • $FormFactor
    • Valid options for the '--emulated-form-factor=' flag are 'none', 'desktop', or 'mobile'. 

    • Default value when no parameter is provided will be 'desktop'

  • $Throttling
    • Valid options for the '--throttling-method=' flag are 'devtools', 'provided', or 'simulate'. 

    • Default value when no parameter is provided will be 'provided'

  • $Categories (array of strings)
    • Valid options for the '--only-categories=' flag are 'accessibility', 'best-practices', 'performance', 'pwa', 'seo'. 

    • Default value when no parameter is provided will be a comma-delimited string of all applicable categories: 'accessibility,best-practices,performance,pwa,seo'

  • $DestinationPath
    • The local path where the report will be 'dropped' (used as a volume mapping to the container's '/home/chrome/reports' directory)

    • Default value when no parameter is provided will be "C:/lighthouse"
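Pieced together from the list above, the parameter block might look like this (a sketch; ValidateSet enforces the valid flag values):

```powershell
param(
    # The URL to audit - the only required parameter
    [Parameter(Mandatory = $true)]
    [string]$Url,

    # Maps to Lighthouse's --emulated-form-factor= flag
    [ValidateSet('none', 'desktop', 'mobile')]
    [string]$FormFactor = 'desktop',

    # Maps to Lighthouse's --throttling-method= flag
    [ValidateSet('devtools', 'provided', 'simulate')]
    [string]$Throttling = 'provided',

    # Maps to Lighthouse's --only-categories= flag; defaults to everything
    [ValidateSet('accessibility', 'best-practices', 'performance', 'pwa', 'seo')]
    [string[]]$Categories = @('accessibility', 'best-practices', 'performance', 'pwa', 'seo'),

    # Local folder mapped to the container's /home/chrome/reports directory
    [string]$DestinationPath = 'C:/lighthouse'
)
```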

We'll add the docker pull command for femtopixel/google-lighthouse first.  During the initial execution of the script, all required images will be downloaded from Docker Hub.  If your local image becomes stale and a newer version is available, this will automatically update it. 

Then add the docker run command with the -v flag to mount a volume mapping the local $DestinationPath to the /home/chrome/reports directory on the container. Include the $Url parameter at the end, followed by the remaining options:
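Inside the script, that pair of commands might look like this (a sketch; variable names as defined above, `--rm` added to clean up the container):

```powershell
# Refresh the image so a stale local copy gets updated automatically
docker pull femtopixel/google-lighthouse

# Run the audit; the volume mount drops the .html report into $DestinationPath
docker run --rm -v "$($DestinationPath):/home/chrome/reports" femtopixel/google-lighthouse $Url --emulated-form-factor=$FormFactor --throttling-method=$Throttling --only-categories=$($Categories -join ',')
```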


When the docker run command is executed, Docker will take over, and Lighthouse will begin to execute in the container. Once completed, a .html file will be available in the $DestinationPath.

To take it a step further, we can open the $DestinationPath in Windows Explorer by using an Invoke-Item command:
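That step is a one-liner (using the $DestinationPath parameter from above):

```powershell
# Open the report folder in Windows Explorer once the audit completes
Invoke-Item $DestinationPath
```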


If we want to open the .html report itself, we can set the PowerShell location to the $DestinationPath, followed by an Invoke-Item where we pass in the latest .html file from Get-ChildItem.
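A sketch of that last step, sorting by LastWriteTime so the newest report opens in the default browser:

```powershell
# Move into the report folder and open the most recently written .html report
Set-Location $DestinationPath
Invoke-Item (Get-ChildItem -Filter *.html | Sort-Object LastWriteTime | Select-Object -Last 1)
```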


Simple - yet effective!

๐Ÿ Final Script


⌨ Example Usage

Desktop form factor auditing all categories:
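With the defaults doing the work, only the URL is needed (placeholder URL):

```powershell
.\LighthouseReport.ps1 -Url https://www.example.com
```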


Desktop form factor auditing Best Practices, Performance and SEO only:
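Something like (placeholder URL):

```powershell
.\LighthouseReport.ps1 -Url https://www.example.com -Categories best-practices,performance,seo
```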


 
Mobile form factor auditing Performance only:
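Something like (placeholder URL):

```powershell
.\LighthouseReport.ps1 -Url https://www.example.com -FormFactor mobile -Categories performance
```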


 
  

💡 TIP: When setting a parameter (-FormFactor, -Throttling, or -Categories), you can use Ctrl+Space to display the valid options and hit Enter to select one.



๐Ÿ‘ Result



🙌 Feel free to grab a copy and modify it to your liking.

Wednesday, July 22, 2020

Approaches to Dockerizing Existing Sitecore Solutions for Local Development


As a developer at a digital agency working in Managed Services, I work with multiple customers spanning multiple versions of Sitecore. The client sites, more often than not, are inherited from vendors outside of reach - each with a unique set of onboarding steps and requirements.

Thursday, May 21, 2020

Part II - Integrating Automated Reverse Azure Database Migration PowerShell Script into Azure DevOps


In my last post, we wrote a handy PowerShell script that takes the latest Master and Web SQL Databases from a Production-level Azure Resource Group and imports them into a Staging/UAT/Dev Azure Resource Group for a seamless reverse database promotion process.  

The original script, however, relies on a developer running it manually on a local machine and authenticating with their credentials in order to utilize the AzureRm commands:
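That manual step is the legacy AzureRM sign-in cmdlet at the top of the script:

```powershell
# Prompts the developer for interactive Azure credentials
# before any of the AzureRm commands can run
Login-AzureRmAccount
```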

We can take this script a step further and integrate it as a new stage in the existing Azure DevOps Release Pipeline, or as a new dedicated Release Pipeline that can be executed independently.

In this example, we will create a new Azure DevOps Release Pipeline.  We'll assume a Service Principal connection already exists (which is likely if you're already deploying to your App Services using Azure DevOps) and that you have the proper administrator permissions to create pipelines in Azure DevOps.   We'll also be working with an Inline Azure PowerShell script job instead of including a script file from an artifact.  The steps will differ slightly if you want to go that route, but the concept remains the same. 

Release Pipeline Setup


Head over to the Pipelines > Release dashboard, click the New dropdown and select New release pipeline.


In the 'Select a template' menu, click 'Empty job'.

Modify the Pipeline name, then click on Stage 1 and click the plus sign on the Agent job to add a new task.  Search for 'powershell', find the Azure PowerShell task, and click the Add button.


Set the Azure Subscription to the appropriate service principal, set the Script Type to Inline Script, and set the Azure PowerShell Version to the latest installed version.


Save the pipeline and navigate to the Variables section

Variable Setup

Here, we'll add all the variables that we'll consume in the script - allowing for future modification without touching the script code itself.  

In our case, our script calls for the following variables: 
  • sourceResourceGroupName
  • sourceSqlServerName
  • sourceMasterDbName
  • sourceWebDbName

  • targetResourceGroupName
  • targetSqlServerName
  • targetSqlServerAdminUserName
  • targetSqlServerAdminUserPassword
  • targetMasterDbName
  • targetMasterSqlUserPassword
  • targetWebDbName
  • targetWebSqlUserPassword
  • targetCdServerName
  • targetCmServerName


Script Modifications


Luckily, our original script doesn't need too much tinkering! Just a bit 😉 

First, we'll want to remove the Login-AzureRmAccount command altogether, since the Azure PowerShell task in the pipeline will authenticate using the service principal.
 
We'll then replace any hardcoded values throughout the script with the corresponding pipeline variables we previously configured, using the $env:someVariableName format:
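A hypothetical excerpt of that substitution (variable names per the pipeline setup above; pipeline variables surface in the task as environment variables):

```powershell
# Hardcoded values swapped for pipeline variables exposed via $env:
$sourceResourceGroupName = $env:sourceResourceGroupName
$sourceSqlServerName     = $env:sourceSqlServerName
$targetResourceGroupName = $env:targetResourceGroupName
$targetSqlServerName     = $env:targetSqlServerName
# ...and likewise for the remaining variables defined in the pipeline
```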

We'll finish this off by placing the modified script in the Inline Script field of our Azure PowerShell task.