The smallest Skype for Business front-end server

There were a couple of reasons why this project took me a week:
– I have limited resources in my personal lab (especially storage);
– I wanted to push my knowledge of the OS and of Skype4B deployment to the limit.

If I already ‘get on the nerves’ of customers and colleagues when I request and deploy servers with less than 100GB of HDD, you can imagine how I feel about Microsoft’s minimum requirement of 72GB of free disk space (not including the OS?).

My current standard uses a Windows 2012 server with a total of 55GB split across 3 HDDs. The Operating System (Drive C) and Skype for Business Front-End (Drive D) take around 36GB.
skype4b-win2012-size

How much smaller can you make the same Front-End server? Around 18GB (*)
skype4b-win2012r2core-size
(*) not counting the space for IIS logs, Windows Fabric traces and the page file

The answer to this is the same as the answer to the question: can I deploy Skype for Business on a Windows 2012 R2 Server Core?
Here are some good reasons to use the Windows Server Core edition, as Microsoft describes them:
– less disk space and RAM consumption;
– reduced attack surface (no GUI and fewer OS vulnerabilities).

In fact, the Core edition already has 98% of the installation prerequisites for Skype for Business Server 2015. In this post I will enumerate the challenges you face if you try it. Some are real challenges, others are just glitches of the main Skype4B setup.

Windows Identity Foundation 3.5 (WIF)

This is one prerequisite where you will get an error, and the Microsoft KB clarifies that you will not be able to install it without installing 4GB of the minimal server interface. All this to get 7 small, outdated files that are supposed to be included in the .NET Framework 4.5 (included natively on Windows 2012 R2 Server).
In fact, that is even described in the OS package:
Microsoft-Windows-Identity-Foundation-Package~31bf3856ad364e35~amd64~~6.3.9600.16384.mum: “Windows Identity Foundation (WIF) 3.5 is a set of .NET Framework classes that can be used for implementing claims-based identity in your .NET 3.5 and 4.0 applications. WIF 3.5 has been superseded by WIF classes that are provided as part of .NET 4.5. It is recommended that you use .NET 4.5 for supporting claims-based identity in your applications.”

NOTE: bootstrapper.exe doesn’t validate whether WIF is installed at the prerequisites stage. You will only get an installation failure at the MicrosoftIdentityExtensions.msi package.

The workaround is simply being able to ‘add-package’ the OS package mentioned above 😉
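As a minimal sketch of that workaround (assuming the package listed above is present in the local component store – the build number in the file name may differ on your server), you can add the WIF package directly with DISM:

    dism.exe /online /add-package /packagepath:"C:\Windows\servicing\Packages\Microsoft-Windows-Identity-Foundation-Package~31bf3856ad364e35~amd64~~6.3.9600.16384.mum"

After it completes, Get-WindowsFeature Windows-Identity-Foundation should report the feature as installed.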

IIS Management console

This is a ‘strange prerequisite’. Why do you need the IIS Management Console snap-in (MMC) to install/run Skype4B?
missing-iis-mmc
MMC support is only available with the minimal server interface installed, so you will get an error when trying to add it with Add-WindowsFeature Web-Mgmt-Console.
Workaround: just provide the key that the bootstrapper looks for by adding the REG_DWORD value ‘ManagementConsole’ to the ‘HKLM\SOFTWARE\Microsoft\InetStp\Components’ key. You can even set it to zero (not installed) since the bootstrapper only checks for its existence (see the PowerShell sketch after the Media Foundation workaround below).

Media Foundation

This one is a little more ‘ridiculous’. You can install Media Foundation on Windows Core:
dism.exe /online /enable-feature /featurename:ServerMediaFoundation /all
but even though it appears as installed in Get-WindowsFeature, the bootstrapper will report it as missing:
missing-mediafoundataion

The reason is that it’s checking for a different installed component: ‘Server-Gui-Shell’, which is yet another extra on top of the minimal server interface.

Workaround: add the REG_DWORD value ‘Server-Gui-Shell’ (it must be 1) to the ‘HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Server\ServerLevels’ key.
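Here is a minimal PowerShell sketch covering both registry workarounds (the IIS Management Console one above and this Media Foundation one); the key paths and value names are exactly the ones the bootstrapper checks for, as described above:

    # Make the bootstrapper believe the IIS Management Console is present (existence is enough, so 0 works)
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\InetStp\Components' -Name 'ManagementConsole' -PropertyType DWord -Value 0 -Force
    # Make the bootstrapper believe the Server-Gui-Shell level is installed (the value must be 1)
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Server\ServerLevels' -Name 'Server-Gui-Shell' -PropertyType DWord -Value 1 -Force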

The last ‘twist’

By this point you have managed to install all the Skype4B Front-End components and to start the main service (RTCSRV), but not the services that rely on audio (e.g. RTCAVMCU, RTCCAA) and remote data access (e.g. RTCDATAMCU). The reason is that 7 required DLLs are only included in the Windows Server Standard edition:
– DirectX11 and real-time media handlers;
– Remote Access handling.

Workaround: as soon as you get a copy of the 7 missing DLL files, you can start the remaining Skype4B services and you now have a fully operational Front-End server!

Conclusion

From the description above, the big reason why Microsoft doesn’t support Skype4B on Windows Server Core comes down to 7 DLL files that cannot be separated from the install edition.

Of course, by now you can see that this is only an option for functional testing in a LAB or for demos. Microsoft will never support this, even if there is a way to install all the missing parts using the several Windows setup command lines available.

The other no-go would be the administration/operations team: there would be a ‘revolution’ if people found out that there is no Windows GUI to manage the server (although you can manage servers remotely from a full-GUI ‘management server’).

As a last comment: using the ‘MS-approved’ Windows Server installation, I can tell you that it’s possible to run on a Windows 2012 R2 server with a little less than 30GB of HDD.
skype4b-win2012r2-size
…but there’s still room to squeeze a little more 😉

Lync/Skype4B embedded links exploit

I decided to share this MSitPros blog post to show how you can exploit Lync/Skype4B rich IM using embedded links to SMB shares.

careful
As stated by the author, exploiting this to grab the NTLM hash might be less successful for an external attacker (SMB traffic is usually blocked), but a rogue LAN user or a deceiving ‘hotspot provider’/‘internet cafe’ might well try this one.

Rich text IM (rich fonts, embedded pictures and links) is a very nice feature of Lync/Skype4B but it is also where the common MS Office security issues are found:

  • MS16-039: Security update for Microsoft Graphics Component: April 12, 2016
  • MS16-097: Security update for Microsoft Graphics Component: August 9, 2016
  • MS15-116: Security update for Microsoft Office to address remote code execution: November 10, 2015

Don’t panic right away if you have full control/security policies over your LAN users, so that no one can just plug in a rogue device (or install the required exploit software on their work PC).
The attacker must be able to reach the user – either he has an internal Lync/Skype4B account (which means he might already have hacked the network), or he is using company or Skype federation.
Even if the attacker gets the hash, the next step is to use it against a server resource he wants to access. An external attacker will have the additional challenge of reaching your internal LAN.

Just like with Outlook, be careful when opening links or attachments. Better ways to prevent this:
– block links on IM (at least for federation);
– use only the NTLMv2 or Kerberos authentication protocols (although there are known ways to exploit them in the same way)
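Here is a minimal, hedged sketch of both mitigations – the IM filter from the Lync/Skype4B Management Shell and the NTLMv2-only policy as a registry value on a Windows host (normally you would push the latter via GPO); check Get-CsImFilterConfiguration first and test before rolling anything out:

    # Disable active hyperlinks in IM conversations (the filter applies to the configured URI prefixes)
    Set-CsImFilterConfiguration -Identity Global -Enabled $true -Action Block
    # Send NTLMv2 responses only and refuse LM/NTLM (LmCompatibilityLevel 5)
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'LmCompatibilityLevel' -PropertyType DWord -Value 5 -Force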

My key point is that security is an important topic when planning and deploying Lync/Skype for Business… don’t just go for a plain next>next>ready installation.

Call Quality Dashboard – Part 3: The Portal

After describing the Call Quality Dashboard (CQD) QoE Archiving database and the QoE CUBE, I will now show how to install the Portal component and how it works in the solution.

The CQD Portal is “where users can easily query and visualize QoE data” synchronized by the Archive and processed by the CUBE.
ic841926
The CQD Portal is an IIS-based web application that allows you not only to visualize data but also to create new reports and views and to assign permissions to them. As the above picture shows, it relies on a SQL database to keep all its information.

Installing CQD – Portal

Before performing the installation, the following pre-requisites need to be in place:

  • You need a SQL Database Services instance (dedicated or existing) for the setup to install the Portal support database.
  • On the server that will host the Portal you need to install IIS. The following powershell command will install all the required components:
    Add-WindowsFeature Web-Server, Web-Static-Content, Web-Default-Doc, Web-Asp-Net, Web-Asp-Net45, Web-Net-Ext, Web-Net-Ext45, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Http-Logging, Web-Url-Auth, Web-Windows-Auth, Web-Mgmt-Console  -verbose
  • A dedicated domain service account is recommended so you can grant it the least required privileges. If you installed all the components on the same server you can use the local built-in server account but, if the SQL Database/Analysis Services (CUBE) are deployed on different servers, the account is required.
  • The QoE Archiving and the CUBE need to be already deployed.

The installation package is the same for all CQD components, so: (a) if you are installing all components you can go straight to step 2; (b) if you already installed the QoE Archiving and/or the CUBE on the same server, go to ‘Programs and Features’, ‘change’ the package and proceed to step 2:

  1. Proceed through the welcome screen and license agreement, and choose the binaries install location:
  2. For this part, I will select the Portal and proceed to the configuration screen:

    Configuration options:
    sqlname-vs-instance
    QoE Archive SQL Server: SQL Server instance name for where the QoE Archive database is located.
    Cube Analysis Server: SQL Server Analysis Service instance name for where the cube is located.
    Repository SQL Server: SQL Server instance name where the Repository database is to be created.
    IIS App Pool User – User Name & Password: the account that the IIS application pool should run under to access the other components. You can choose one of the local server service accounts; otherwise choose ‘Other’ and provide a domain service account’s credentials (see the prerequisites explanation above).

  3. After the validations, the installation will ask to proceed until completion, hopefully without any error 🙂

Behind the CQD Portal

What happened and what was configured by the previous installation steps?
This component setup installed some specific files, created a support database and made some updates on the QoE CUBE database:
• The QoERepositoryDb database was created. This database holds all the portal configurations, customized reports, etc.
• ‘IIS App Pool User’ login created and assigned db_owner on the QoERepositoryDb
• ‘IIS App Pool User’ login created and assigned db_datareader on the QoEArchive database
• ‘IIS App Pool User’ added to the QoERole on the CUBE database
• IIS default web site configured with 3 folders that match the directories and files installed.

Known ‘caveats’ regarding the installation and architecture:

  • In rare cases, the installer fails to create the correct settings in IIS, and a manual change is required to allow users to log into the CQD. If users are having trouble logging in, follow the steps described in the ‘known issues’ section of the TechNet article.
  • Cube sync fails – QoEMetrics may contain some invalid records based on end-user clocks. If the time skew is greater than 60 years, the cube import will fail. Check the min and max StartTime/EndTime using the queries below. Look for and delete records in the far past and very distant future; they can be disregarded, and they will break the sync process.
    Select MIN(StartTime) FROM CqdPartitionedStreamView
    Select MAX(StartTime) FROM CqdPartitionedStreamView
    Select MIN(EndTime) FROM CqdPartitionedStreamView
    Select MAX(EndTime) FROM CqdPartitionedStreamView
  • After deploying the CQD on a new server, you can run into a problem where the Portal does not show any data and returns an error saying:
    We couldn’t perform the query while running it on the Cube. Use the Query Editor to modify the query and fix any issues. Also make sure that the Cube is accessible
    To solve it, process the CUBE object and make sure it’s accessible, as described here.
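If you prefer to trigger that cube processing from PowerShell instead of SQL Server Management Studio, here is a minimal sketch (assuming the SQL Server Analysis Services cmdlets – the SQLASCMDLETS module shipped with the SQL management tools – are available; the instance name follows the examples used in this series):

    Import-Module SQLASCMDLETS
    # Fully reprocess the CQD cube database on the SSAS instance
    Invoke-ProcessASDatabase -Server "LYNC-CQD.my.lab\CUBE" -DatabaseName "QoECube" -RefreshType "Full"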

How to manage and monitor the CQD Portal process

The main portal page is accessible via http://<portalserverFQDN>/CQD.
CQD-Portal-main.png

You probably will not see any data because “when the installer is done, most likely the SQL Server Agent job will be in progress, doing the initial load of the QoE data and the cube processing. Depending on the amount of data in QoE, the portal will not have data available for viewing yet.” To check on the status of the data load and cube processing, go to http://<portalserverFQDN>/CQD/#/Health.
CQD-Portal-health
Or (like in my LAB) you don’t have any monitoring data to display yet :). Later on you should see the last successful and failed update status:
CQD-Portal-health-ok

Other configurations that you can perform on the Portal are described in the Deploy CQD TechNet article:

  • Post-install tasks required to have reporting data regarding locations (buildings, network names, subnets, BSSID).
  • By default, any authenticated user has access. This can be changed by using IIS authorization rules to restrict access to a specific group of users.
  • Detailed log messages will be shown if debug mode is enabled. To enable debug mode, go to [CQD installed Dir]\QoEDataService\web.config, and update the following line so the value is set to True:
    <add key="QoEDataLib.DebugMode" value="True" />

And that’s it! You now have CQD fully deployed!
You can now see how Lync/Skype4B is performing, and even build your own reports. Creating them is tricky, but you can learn some basics here.

<Am I missing something? Maybe some more posts about it? Send me feedback, suggestions or requests 😉>

Call Quality Dashboard – Part 2: The CUBE

After describing the Call Quality Dashboard (CQD) QoE Archiving database in part 1, I will now show how to install the CUBE component and how it works in the solution.

The CUBE is “where data from QoE Archive database is aggregated for optimized and fast access” by the Portal component: this is the ‘data crusher’.
ic841926

The CUBE is a SQL Server Analysis Services (SSAS) database, generically known as an online analytical processing (OLAP) cube.

Installing CQD – QoE CUBE

Before performing the installation, the following pre-requisites need to be in place:

  • You need a server with SQL Server Analysis Services (SSAS) installed. The following picture (an all-in-one example) shows the required SQL components for CQD installations:
    ic797717
  • It’s recommended to create a dedicated domain service account and grant it the least required privileges. This account is used to trigger the cube processing.
  • The QoE Archiving Database needs to be already deployed.
  • You need to run the installation on the SQL server where the QoE Archive Database was installed. This is because some files will be installed and used by the SQL Agent.

The installation package is the same for all CQD components, so: (a) if you are installing all components you can go straight to step 2; (b) if you already installed the QoE Archiving on the same server, go to ‘Programs and Features’, ‘change’ the package and proceed to step 2:

  1. Proceed through the welcome screen and license agreement, and choose the binaries install location

  2. For this part I will select the QoE CUBE and proceed to the configuration screen

    Configuration options:
    sqlname-vs-instance
    • QoE Archive SQL Server Instance: SQL Server instance name for where the QoE Archive DB is located. To specify a default SQL Server instance, leave this field blank. To specify a named SQL Server instance, enter the instance name.
    • Cube Analysis Server: SSAS server and instance name for where the cube is to be created. This can be a different machine, but the installing user has to be a member of the server administrators of the target SSAS instance.
    • Use Multiple Partitions: ‘Multiple Partitions’ requires the Business Intelligence or Enterprise edition of SQL Server. ‘Single Partition’ only requires the Standard edition, but cube processing performance may be impacted.
    • Cube User – User Name & Password: Domain service account that will trigger the cube processing.

  3. After the validations, the installation will ask to proceed until completion, hopefully without any error 🙂

Behind the CQD QoE CUBE

What happened and what was configured by the previous installation steps?
This component setup installed some specific files, created an SSAS database and made some updates on the QoE Archiving database:
• The QoECube database was created;
• ‘Cube User’ login created and assigned db_datareader and db_datawriter on the QoEArchive;
• a credential created with the ‘Cube User’. This will be used to impersonate the connection to the QoECube on the SSAS server;
• a linked server source, mapping all the databases on the source SQL server;
• a 2nd step on the SQL Agent job (created by the QoE Archive setup) and a proxy. This is the ‘brain’ that will trigger the cube processing;
• the files used by the agent to trigger the cube processing.

Known ‘caveats’ regarding the installation and architecture:

  • The script command ‘process.bat’ that triggers the cube processing overwrites the error log ‘process.log’ at every execution. Since the Agent job runs every 15 minutes, you might not catch the cause/history of past errors.
    As a quick workaround, you can change the script command to append (>>) the output to the existing log file:
    “%~1QoECubeService.exe” “%~1cubeModel.xml” >> “%~1process.log”
  • Don’t use a domain user account password starting with ‘+’. The setup SQL procedure will ignore it, you will get the following error on the SQL job, and the cube trigger will not start:
    “Unable to start execution of step 1 (reason: Error authenticating proxy LAB\service.cube, system error: The user name or password is incorrect.).  The step failed.”

How to manage and monitor the CQD QoE CUBE process?

The main CUBE processing is triggered using the same SQL Agent job created by the QoE Archiving setup. A second step is added to the job and, whenever there is new data synchronized from QoEMetrics to QoEArchive, the job will launch a command script:
CQD-CUBE-SQLAgent
Execution errors will be logged in the SQL Agent log, and details can be found in the file ‘process.log’ generated in the same folder as the command script.
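If you want to check the job history from PowerShell instead of SQL Server Management Studio, here is a minimal sketch against the msdb history tables (assuming Invoke-Sqlcmd is available; the LIKE filter on the job name is an assumption – adjust it to the job the CQD setup created on your server):

    # List the 20 most recent step outcomes of the CQD-related SQL Agent job(s)
    $query = "SELECT TOP (20) j.name, h.run_date, h.run_time, h.step_name, h.run_status, h.message " +
             "FROM msdb.dbo.sysjobhistory h JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id " +
             "WHERE j.name LIKE '%QoE%' ORDER BY h.instance_id DESC"   # assumption: adjust the job-name filter
    Invoke-Sqlcmd -ServerInstance "LYNC-CQD.my.lab\CUBE" -Database "msdb" -Query $query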

Now you have a replica of your QoE data and a tool to process and analyse it. You now need an interface to visualize and manipulate it, which is described in part 3.

And finally…

There is a way to script the previous installation in one single command line (you just need to replace the orange text with your settings):

Msiexec /i "CallQualityDashboard.msi" ADDLOCAL=QoECube REBOOT=ReallySuppress CQD_INSTALLDIR="D:\Skype4B\CQD" CUBE_ARCHIVE_SERVER="LYNC-CQD.my.lab\CUBE" DISABLE_CUBE_MULTIPLE_PARTITION="true" CUBE_ANALYSIS_SERVER="LYNC-CQD.my.lab\CUBE" CUBE_USER="LAB\service.cube" CUBE_PASSWORD="WhoKnows?" /qb!

  • You still need to run this on the server holding the QoE Archiving database (it needs to install the agent script files).
  • Be sure to use lowercase ‘true’ or ‘false’ for this parameter.
    The setup will write this value ‘as is’ into the cubeModel.xml file; otherwise the Agent job will fail and you will see an error in ‘process.log’:
    Error while Processing: There was an error deserializing the object of type Microsoft.Rtc.Qoe.Cqd.QoECubeService.CubeProcessModel. The value ‘True’ cannot be parsed as the type ‘Boolean’.
    You can fix this by ‘lowercasing’ the value of the <DisablePartitioning> parameter in cubeModel.xml.
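A minimal sketch of that fix in PowerShell (the path below is just an example based on the install directory used in this post – adjust it to wherever cubeModel.xml ended up on your server):

    # Lowercase the value inside <DisablePartitioning> so the cube processing can parse it
    $cubeModel = 'D:\Skype4B\CQD\cubeModel.xml'
    (Get-Content $cubeModel) -replace '<DisablePartitioning>True</DisablePartitioning>', '<DisablePartitioning>true</DisablePartitioning>' |
        Set-Content $cubeModel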

Call Quality Dashboard – Part 1: The QoE Archive Database

Overview of Call Quality Dashboard (CQD)

The QoE database is replicated to another SQL database (named the ‘QoEArchive’) and is then explored through a user web portal using a SQL Server Analysis Services cube (the CUBE). The CQD is composed of 3 components: the QoE Archive DB, the CUBE and the Portal.

You can read more information on the TechNet article: ‘Plan for Call Quality Dashboard for Skype for Business Server 2015’.
That article also explains that the components can be installed on one single server, or distributed across up to 3 (I would say it can go up to 4) servers.

Inspired by this, I decided to split the subject into three posts: how to install each component, and also how each element works. Besides being easier to read, it lets you understand how to deploy on multiple servers or on a single server.

Installing CQD – QoE Archiving Database

As seen in the above picture, the QoE Archive is a database with some procedures that replicate the data from a Lync/Skype4B ‘QoEMetrics’ database.
What do you need as prerequisites to install this:

  • A SQL database service (a dedicated one is recommended).
    You need the Enterprise or Business Intelligence edition if you want to use ‘multiple partitions’, which allows better CUBE processing performance for large amounts of data.
  • The SQL Agent service must be running (automatic startup) on that SQL server. An agent job will run periodically to replicate the data (see the sketch after this list).
  • An account with db_datareader role/permissions on the QoEmetrics database
    CQD-QoE-DBuser
    This account will also be granted db_owner on the QoEArchive and it will be impersonated (proxy) to connect to the QoEmetrics.
  • You must run the install package on the SQL server where you want to install the Archive database. The setup reads this info from the local system and doesn’t allow you to change it (using the GUI 😉)
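A minimal sketch to make sure the SQL Agent service is running and set to start automatically (the service name below is the one for a default instance; a named instance uses 'SQLAgent$<InstanceName>' – adjust accordingly):

    # Default SQL instance; for a named instance use 'SQLAgent$<InstanceName>'
    Set-Service -Name 'SQLSERVERAGENT' -StartupType Automatic
    Start-Service -Name 'SQLSERVERAGENT'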

After downloading the CQD package, the setup process is the following:

  1. Proceed through the welcome screen, and choose the binaries install location
  2. For this part I will just select the QoE Archive (deselect the others)

    Configuration options:
    sqlname-vs-instance
    • QoEMetrics SQL Server: SQL Server and instance name where the QoE Metrics database is located.
    • QoE Archive SQL Server Instance: the local SQL Server instance name where the Archive DB is to be created. Leave this field blank for a default SQL setup.
    • QoE Archive Database: create a new database or use an existing one (useful for recovery/migration/connect-new-source scenarios – it will rebuild the ACLs, connectors and jobs).
    • Database File Directory: location where the new database files are to be created. A separate disk volume is recommended.
    • Use Multiple Partitions: ‘Multiple Partitions’ requires the Business Intelligence or Enterprise edition of SQL Server. ‘Single Partition’ only requires the Standard edition, but cube processing performance may be impacted.
    • Partition File Directory: (if using ‘Multiple Partitions’) path to where the partitions for the QoE Archive database should be placed.
    • SQL Agent Job User – User Name & Password: domain service account used to connect to the QoEMetrics database and replicate the data into the QoEArchive.

  3. After the databases, instances and account access validation, the installation will ask to proceed until completion, hopefully without any error 🙂
    CQD-setup-Ready_CQD-setup-ArchiveCompleted

Behind the CQD QoE Archive Database

What happened and what was configured by the previous installation steps?
This component setup didn’t install any specific binaries. The installation was in fact a series of configurations on the SQL server used for the CQD Archive database:

  • QoEArchive database was created
  • ‘SQL Agent Job User’ login created and assigned db_owner of the QoEArchive
  • a credential created with the ‘SQL Agent Job User’. This will be used to impersonate the connection to the QoEMetrics on the source SQL server
  • A linked server source, mapping all the databases on the source SQL server
  • A SQL Agent job and proxy. This is the ‘heart’ that will synchronize the QoEMetrics and the QoEArchive

Known ‘caveats’ regarding the installation and architecture:

  • Both the database and the transaction log files are going to be installed in the same folder. You can only change this afterwards using SQL tools and procedures.
  • I’m not 100% sure about this one (it needs further investigation), but I couldn’t find documented support for a mirrored QoEMetrics database.
    If the database fails over to the other node, the synchronization process fails.
  • Don’t use a domain user account password starting with ‘+’. The setup SQL procedure will ignore it, you will get the following error on the SQL job, and the data will not get replicated:
    “Unable to start execution of step 1 (reason: Error authenticating proxy LAB\service.CQD, system error: The user name or password is incorrect.).  The step failed.”
    You can solve this by manually setting the correct password on the ‘QoEArchiveCredential’.
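A minimal sketch of that manual fix using T-SQL through PowerShell (assuming Invoke-Sqlcmd is available; the identity and secret below are placeholders – use your actual service account and its real password):

    # Reset the secret of the credential used by the CQD proxy (names follow the examples in this post)
    $fix = "ALTER CREDENTIAL QoEArchiveCredential WITH IDENTITY = 'LAB\service.CQD', SECRET = 'TheRealPassword'"
    Invoke-Sqlcmd -ServerInstance "LYNC-CQD.my.lab\CUBE" -Query $fix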

How to manage and monitor the CQD QoE Archive process?

As I said before, the QoE Archive is a data synchronization process between the Lync/Skype4B QoEMetrics database and the QoEArchive.
This is done using a SQL agent job that runs, by default, every 15 minutes:
CQD-archive-agentjob

This ‘simple’ job triggers a series of stored procedures that will sync the database tables.
You can see the sync job status and errors in a particular table. If you open the tables in QoEMetrics and QoEArchive, you will confirm this (the second one will have some additional tables that are used to control the sync process):
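A quick, hedged way to compare the two table lists from PowerShell (assuming Invoke-Sqlcmd is available; the instance names follow the examples used in this series – adjust them to your environment):

    # QoEArchive should show some extra tables that control the sync process
    Invoke-Sqlcmd -ServerInstance "LYNC-BE.my.lab\INST1" -Database "QoEMetrics" -Query "SELECT name FROM sys.tables ORDER BY name"
    Invoke-Sqlcmd -ServerInstance "LYNC-CQD.my.lab\CUBE" -Database "QoEArchive" -Query "SELECT name FROM sys.tables ORDER BY name"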

I used the words ‘DB synchronize/replication’ to simplify the idea. In fact, it does what the name says: it ‘collects data and adds it to the existing archive’. “CQD’s QoE Archive database provides a second copy of the QoE Metrics data with much longer retention capabilities.”

If you have multiple Skype4B pools, each with its own Monitoring Server, “CQD does not merge data from multiple QoEMetrics databases!”. “Each CQD instance must point to one QoEMetrics database!”.(*)
“However, because CQD will move much of the reporting workload off of the Monitoring Server, large organizations that deployed one Monitoring Server per Skype4B Pool topology should consider using one Monitoring Server for all topologies”.
But this can compromise using the Monitoring Reports tool to analyse (older) data in a different way, and it doesn’t handle the other, bigger and heavier monitoring database: the LcsCDR. – This is an open topic for a future blog post 🙂

You can monitor the replication process not just through the Agent job logs; there is also a table that contains the history. For example, the Agent job will report an error if there is no new data to replicate, and you can only see that here:
CQD-archive-logs.png

Now you have a replica of your QoE data to analyse with the tools described in part 2.

Wait! Easter egg!

For those who managed to read this far without falling asleep, here’s a ‘goodie’.
Here’s how you can automate the previous setup from the command line (you just need to replace the orange text with your settings):

Msiexec /i CallQualityDashboard.msi ADDLOCAL=QoEArchive REBOOT=ReallySuppress CQD_INSTALLDIR="D:\Skype4B\CQD" QOE_METRICS_SQL_SERVER="LYNC-BE.my.lab\INST1" ARCHIVE_SQL_SERVER="LYNC-CQD.my.lab\CUBE" INSTALL_NEW_ARCHIVE=True ARCHIVE_FILE_DIRECTORY="E:\Databases\CQD" DISABLE_ARCHIVE_MULTIPLE_PARTITION=True ARCHIVE_SQL_AGENT_USER="mydomain\cqdserviceaccount" ARCHIVE_SQL_AGENT_PASSWORD="itsAsecret" /qb!

The interesting part is that (for SQL Standard/single-partition deployments) you can run this setup command from another server that is not the CQD SQL database server (as long as you have the SQL client tools installed on the one you run it from).

 

Call Quality Dashboard: built-in reports

6460fig7
The Call Quality Dashboard (CQD) is a new feature released at the same time as Skype for Business 2015, but it also works with Lync 2013. In simple words, it gives you a visual overview of your QoE/monitoring data.

It doesn’t have nearly the feature set of paid products like Event Zero (which Microsoft bought in January 2016) or IR Prognosis but it’s free, and it can certainly give more insight than the standard monitoring reports.

It’s a powerful application that allows you to create your own reports, and it ships with 44 built-in reports. This post shares the hierarchical listing of those reports, which you will find right after you finish the installation (tomorrow I will start posting about the CQD architecture, components and installation).

For now it’s just a dump of the headers and descriptions but, as soon as I start getting some nice graphics, I will update this post.

1. Audio Streams Monthly Trend (Managed vs Unmanaged Audio Streams)

This Report shows the monthly audio streams count, poor count, and poor ratio for the last 7 months. There are no filters applied so the data is what is contained in the QoE Database. Audio calls made over wireless and external networks can cause poor call rates to go up. To find the root cause of the poor calls, drill into the data by clicking on the title of the report!

1.1. Managed Audio Streams Monthly Trend

The Managed bucket contains audio streams made by servers and clients on wired corporate network connections. Any poor streams seen here need investigation. Click the report title to drill down!

1.1.1. Server-Server

The Server-to-server Audio Streams Report provides a good baseline for your Managed network environment. The percentage of poor calls using the ClassifiedPoorCall measure is expected to be below 0.5%.

1.1.1.1. Server-Server Monthly Trend

This Report is a copy of the Parent Report and is included here as a reference. The Y-axis scale is fitted to the call volume for Wired-Wired-Inside calls so month-to-month changes are more visible here than in the Parent Report.

1.1.1.2. Server-Server Daily Trend

This Report shows the server-to-server audio streams by day. It has the same filter condition as the Monthly Trend Report.

1.1.1.3. Server-Server by Transport Type

Audio streams between servers should only use UDP. Any TCP streams are not expected and should be investigated. If there is a high percentage of poor TCP streams, it could explain the poor streams in the Server-Server scenario.

1.1.1.4. Server-Server by Server Type Pairs

This Report shows the Poor call distribution among the server user agent type combinations. Each combination represents a specific network path and server endpoint health. The Gateway server type can include SBC providers. Click the title to see a breakdown by GW endpoint names!

1.1.1.4.1. Mediation Server-Gateway Audio Streams

This Report is a copy of the Parent Report except it also includes a filter for just the Mediation Server-Gateway calls. It is included here as a reference.

1.1.1.4.2. Server-Server by Server Location City Pairs

If the servers are generally located in different cities, this Report can show potential network issues in the network path between different locations. The City column requires IT-supplied subnet IP-to-Network-to-City mapping data to be entered into the QoEArchive database.

1.1.2. Server-Wired-Inside

The Server-to-client-wired-inside Report is used to monitor the health of the network paths between the clients and servers.

1.1.2.1. Server-Wired-Inside Monthly Trend

This Report is a copy of the Parent Report and is included here as a reference. The Y-axis scale is fitted to the call volume for Wired-Wired-Inside calls so month-to-month changes are more visible here than in the Parent Report.

1.1.2.2. Server-Wired-Inside by Client Transport Type

Audio streams on the corporate intranet should only use UDP. Any TCP streams are not expected and should be investigated. If there is a high percentage of poor TCP streams, it could explain the poor streams in the Server-Wired-Inside scenario. Click the title of the report to drill down!

1.1.2.2.1. Server-Wired-Inside by Client Transport

This Report is a copy of the Parent Report and is included here as a reference.

1.1.2.2.2. Server-Wired-Inside (TCP) by Client Endpoint

This Report shows all the client endpoints that have reported TCP streams. The rows are sorted by Count of Good streams descending.

1.1.2.3. Server-Wired-Inside by Server Type  

This Report shows the server-to-client-wired-inside calls by Server Type. It can show problems due to server config that are not captured by the Server-Server Reports. Investigate servers that have higher poor call rates than others as well as servers that show sudden increase in poor call rates.

1.1.2.4. Server-Wired-Inside by Client Connectivity ICE 

Audio streams on the corporate intranet should only use UDP. Any TCP streams are not expected and should be investigated. If there is a high percentage of poor TCP streams, it could explain the poor streams in the Server-Wired-Inside scenario. Click the title of the report to drill down!

1.1.2.4.1. Server-Wired-Inside by Client Transport 

This Report is a copy of the Parent Report and is included here as a reference.

1.1.2.4.2. Server-Wired-Inside (TCP) by Client Endpoint 

This Report shows all the client endpoints that have reported TCP streams. The rows are sorted by Count of Good streams descending.

1.1.2.5. Server-Wired-Inside by Client Building  

If Subnet IP-to-Network and Building mappings are populated in the QoEArchive database, this Report will light up with the server-to-client-wired-inside call data broken down by the client endpoint’s Building Name. This is a very powerful way to compare Poor Call Rates for all buildings.

1.1.2.6. Server-Wired-Inside by Client Type  

This Report shows the server-to-client-wired-inside calls by Client User Agent Type. It can show problems due to QoS configuration since that can be applied based on client executable name.

1.1.2.7. Server-Wired-Inside by Client Network Type  

The Network Type is another IT-supplied data set that allows the network subnets to be tagged with IT-specific context. For example: “LabNet”, “Wifi”, “Wired”, “DataCenter”, and “Vendor” are all possible classification values. This allows the IT-supplied values to be cross-checked against the client-OS-observed values for the Network Connection Detail.

1.1.3. Wired-Wired-Inside

The Wired-Inside-Client-to-Wired-Inside-Client Report is used to monitor the health of point-to-point calls that do not involve server endpoints. The network path that these calls take is usually different from server-client calls.

1.1.3.1. Wired-Wired-Inside Monthly Trend 

This Report is a copy of the Parent Report and is included here as a reference. The Y-axis scale is fitted to the call volume for Wired-Wired-Inside calls so month-to-month changes are more visible here than in the Parent Report.

1.1.3.2. Wired-Wired-Inside Daily Trend  

This Report shows the daily trend of the count and poor call rate measures for the current month.

1.1.3.3. Wired-Wired-Inside (OCPhone-OCPhone) Daily Trend  

This Report shows just the subset of client-wired-inside-to-client-wired-inside calls where both endpoints are IP Phones. This should represent the best possible scenario for wired and inside calls. Poor call rates < 0.1% are not unexpected.

1.2. Unmanaged Audio Streams

The Unmanaged bucket contains audio streams made by clients on wireless networks, public networks, or home networks. Some amount of poor streams are expected. However, a worsening trend of poor call rates warrants investigation. Click the report title to drill down!

1.2.1. Server-Wifi-Inside

The Server-to-client-wifi-inside Report is used to monitor the health of the corporate wifi network.

1.2.1.1. Server-Wifi-Inside Monthly Trend

This Report is a copy of the Parent Report. It is included here as a reference.

1.2.1.2. Server-Wifi-Inside – Best Subnets

This Report shows call quality over enterprise wifi network for each client subnet IP address. If subnet ip address-to-network name mapping is entered in the QoEArchive database, then this report can be changed to group by client building name instead of subnet IP address.

1.2.1.3. Server-Wifi-Inside – Worst Subnets 

This Report is similar to the previous Report except it is sorted from worst Poor Call Percentage to best.

1.2.1.4. Server-Wifi-Inside by Client Wifi Chipset  

wifi chipset

1.2.2. Server-Wired-Outside  

The Server-to-client-wired-outside Report is used to monitor the health of the network path from the servers to the internet edge. Changes in Poor Call Rates month-to-month should be investigated.

1.2.3. Server-Wifi-Outside 

This Report is used as comparison to the Server-Wired-Outside Report.

1.2.4. Wired-Wired-Outside-DIRECT 

This Report shows poor call quality when 2 client endpoints are connected directly. It is used in conjunction with the Wired-Wired-Outside-RELAY report to identify any potential Media Relay Edge or datacenter edge issues.

1.2.5. Wired-Wired-Outside-RELAY 

This Report shows poor call quality when 2 client endpoints are connected through one or more Media Relay Edge servers. An increase in poor call percentage should be investigated.

1.2.5.1. Wired-Wired-Outside-RELAY 

This Report is a copy of the Parent Report. It is included here for reference.

1.2.5.2. Wired-Wired-Outside-Relay By Relay IP Address 

This Report shows the client-outside-wired-to-client-outside-wired calls that used one or more Media Relay Edge Servers. The data is broken down by one client’s Relay Server IP Address. There could be more than one Relay in the call but pivoting on just one can give a sampling of the relative call quality across the Relay servers. This Report also demonstrates the use of browser-side filtering of the results to remove any rows that do not contain more than one good stream.

1.2.6. Wired-Wired-Outside-Other 

This Report shows poor call quality when 2 client endpoints are connected neither directly nor by a relay. It is used in conjunction with the Wired-Wired-Outside-RELAY and Wired-Wired-Outside-DIRECT reports to identify any potential Media Relay Edge or datacenter edge issues.

1.2.7. Other Unmanaged Calls 

This Report captures the Unmanaged audio streams that do not belong to any of the other Unmanaged Scenarios. For example, Wifi-Wifi calls would be represented in the Report.

1.3. Other (Invalid Report)

The Other bucket contains audio streams that cannot be classified as Managed or Unmanaged. Classification of streams into Managed or Unmanaged requires the network connection type and access location and the data must be reliable. Endpoints that do not send QoE reports will be classified into the Other bucket. The StreamType.StreamType dimension has a value of ‘false’ if the stream cannot be classified.

1.3.1. Other (Invalid Report)  

This Report is a copy of the Parent Report.

1.3.2. Other (Invalid Report) by User Agent Types 

This Report contains Server-to-client calls grouped by the client User Agent Type.

2. User-reported Call Quality Rating Histogram

This Report shows the count of each of the possible User-collected rating. The possible values are 1 – 5 with 5 being the best and 1 being the worst. The rating values are only collected via Skype for Business Clients.

2.1. User-reported Call Quality Rating Monthly Trend 

This Report shows a monthly trend of the count of each of the possible User-collected rating. The possible values are 1 – 5 with 5 being the best and 1 being the worst. The rating values are only collected via Skype for Business Clients.

 

Taking control of the rtcReplicaRoot folder

xds-replica-wrong
When you use the setup (or migration) assistant, you know that you cannot control several installation locations, like the databases and especially the xds-replica folder.

I taught myself, back in Lync 2010, how to control the install location by performing a manual setup of part of the components (see this post at step 10).
But if you are performing a Skype4B in-place upgrade, the assistant will remove the previous version of the replica service and install the new one using the default logic.
If you have (like me) multiple volumes on your Windows server, you might end up with this folder where you don’t want it (like on a dedicated pagefile or SQL data volume).

In case you can’t figure out the logic behind this install location, here’s the only MS documentation reference note about it:
During the upgrade process the xds-replica is placed in the local shared folder on the disk drive with the most free space. If that disk is later removed then you can run into issues such as services not starting.

Let’s skip the discussion of why you would need the emptiest volume for a small directory structure and concentrate on the main issue:

How can I move the rtcReplicaRoot folder?

You can google-fu your way to some references (here and here) on how to manually tweak the folder, shares, ACLs and make some registry changes.
But this has some inconveniences: the uninstallation of the component will probably fail/generate errors. This will complicate an upgrade/patching process and will again require you to fix it manually.

Using the ocscore.msi setup package directly is also a big challenge:
• the REPLICA agent service is inside the ‘Lync/Skype4B core components’. If you use ‘Programs and Features’ to uninstall them (if it even allows it), it will break all the other components;
• if you manage to find the specific uninstall switch, it will – by default – drop the local XDS database (and lose the local topology reference and the local certificates in use);
• also, a new installation can overwrite the existing XDS database with an empty one.

By using the undocumented setup switches, you can effectively remove the component and control the setup of the rtcReplicaRoot in a specific folder. This procedure has 3 great advantages:
• it’s a standard, MSI-supported installation – no disruption for patching or upgrades;
• it doesn’t require applying the latest patches, since it uses the local server MSI cache;
• it can be done without stopping the main Lync/Skype4B services 🙂

Skype for Business Server 2015

The process was greatly simplified by the inclusion of two tiny switches that allow future upgrades (unlike previous versions of Lync):

  1. stop the related services (via powershell)
    Stop-CsWindowsService REPLICA
    Stop-CsWindowsService RTCCLSAGT
  2. Uninstall the related component services
    MsiExec.exe /i {DE39F60A-D57F-48F5-A2BD-8BA3FE794E1F} KEEPDB=1 REMOVE=Feature_LocalMgmtStore REBOOT=ReallySuppress /qb!
    This will remove all the related service components, rtcreplicaroot folder, share and ACL’s
  3. Install the component services
    Msiexec /i {DE39F60A-D57F-48F5-A2BD-8BA3FE794E1F} ADDLOCAL=Feature_LocalMgmtStore SKIP_DB=1 REPLICA_ROOT_DIR="[fullpathto_rtcreplica_folder]" REBOOT=ReallySuppress /qb!
    This will install all the related service components, create the rtcreplicaroot folder in the desired location, create the share and set the ACLs and registry entries.
  4. Enable the local replica service (via powershell)
    Enable-CsReplica
  5. start the related services (via powershell)
    Start-CsWindowsService REPLICA
    Start-CsWindowsService RTCCLSAGT

Lync Server 2013

The setup package was not designed for this particular task:
• The install will overwrite any existing XDS with a new/empty one
• The uninstall will drop/delete existing XDS

In fact, the Skype for Business in-place upgrade assistant was designed to handle exactly this situation, by using the existing utility (InstallCsDatabase) that manages the local databases:

  1. stop the related services (via powershell)
    Stop-CsWindowsService REPLICA
    Stop-CsWindowsService RTCCLSAGT
  2. Detach the XDS database (to avoid the uninstall from deleting it)
    “%CommonProgramFiles%\Microsoft Lync Server 2013\DbSetup\InstallCsDatabase.exe” /Detach /Feature:CentralMgmtStore
  3. Copy the database files (xds.mdf and xds.ldf) to a safe location
  4. Uninstall the related component services (elevated command prompt rights)
    MsiExec.exe /i {8901ADFC-435C-4E37-9045-9E2E7A613285}  REMOVE=Feature_LocalMgmtStore REBOOT=ReallySuppress /qb!
    This will remove all the related service components, rtcreplicaroot folder, share and ACL’s
  5. Install the component services  (elevated command prompt rights)
    Msiexec /i {8901ADFC-435C-4E37-9045-9E2E7A613285} ADDLOCAL=Feature_LocalMgmtStore REPLICA_ROOT_DIR="[fullpathto_rtcreplica_folder]" REBOOT=ReallySuppress /qb!
    This will install all the related service components, create the rtcreplicaroot folder in the desired location, create the share and set the ACLs and registry entries.
  6. Drop the empty XDS database (created on step 5)
    “%CommonProgramFiles%\Microsoft Lync Server 2013\DbSetup\InstallCsDatabase.exe” /Drop /Feature:CentralMgmtStore
  7. Copy back the database files (xds.mdf and xds.ldf) from step 3
  8. Attach the XDS database
    “%CommonProgramFiles%\Microsoft Lync Server 2013\DbSetup\InstallCsDatabase.exe” /Attach /Feature:CentralMgmtStore
  9. Enable the local replica service (via powershell)
    Enable-CsReplica
  10. Start the related services (via powershell)
    Start-CsWindowsService REPLICA
    Start-CsWindowsService RTCCLSAGT

Notes about these commands and procedures

  • The uninstall will prompt you with a warning regarding active core components services. You can safely confirm this action, as the main core components are kept.
  • You need to run the Msiexec and InstallCsDatabase with an elevated command prompt
  • InstallCsDatabase is case sensitive on some of the parameters (/Feature:)
  • Feature_LocalMgmtStore – is the feature name identifier inside the ocscore.msi package
  • KEEPDB=1 will prevent the uninstall to drop the XDS database
  • SKIP_DB=1 will prevent the setup to overwrite and use any existing XDS database
  • REPLICA_ROOT_DIR tells the setup to create the ‘xds-replica’ folder inside the defined path (I usually use a subdirectory inside the Skype4B installation folder)
  • You can use the PS commands to check if the local replica service is working properly (UptoDate=true)
    Invoke-CsManagementStoreReplication -ReplicaFqdn <your FE server FQDN>
    Get-CsManagementStoreReplicationStatus -ReplicaFqdn <your FE server FQDN>

Congratulations !

You now have control of your xds-replica / rtcReplicaRoot folder 🙂
