spfx error: No development certificate found. Generate a new certificate manually, or set the `canGenerateNewCertificate` parameter to `true` when calling `ensureCertificateAsync`

When running the command gulp serve, you might receive the following error:

No development certificate found. Generate a new certificate manually, or set the `canGenerateNewCertificate` parameter to `true` when calling `ensureCertificateAsync`

To resolve it, run gulp trust-dev-cert.

Deploy a PCF NodeJS app as a scheduled task

I have a NodeJS app that runs as a process and executes a task every 15 minutes using node-schedule.
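As a minimal sketch of what such a worker looks like (the real app uses node-schedule with a cron-style rule; plain setInterval is used here to keep the example dependency-free, and the task body is made up):

```javascript
// Minimal sketch of a worker process that runs a task every 15 minutes.
// The real app uses node-schedule; setInterval keeps this sketch
// free of external dependencies.
function runTask() {
  const ranAt = new Date().toISOString();
  console.log(`[${ranAt}] task executed`);
  return ranAt;
}

// Run once at startup, then every 15 minutes.
runTask();
const timer = setInterval(runTask, 15 * 60 * 1000);
// unref() lets this sketch exit when nothing else is pending;
// remove it in the real worker, which must keep running.
timer.unref();
```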

We first need a manifest.yml file that contains:

---
applications:
- name: APP-NAME
  buildpack: nodejs_buildpack
  no-route: true
  health-check-type: process
  env:
    OPTIMIZE_MEMORY: true

The no-route parameter is set to true so that no route gets assigned, and health-check-type is set to process so that the orchestrator monitors process availability instead of trying to ping a non-existent web endpoint. OPTIMIZE_MEMORY in the env section follows the Pivotal recommendations.

If you need to use a local package in your app, you’ll have to pack it first. To do so, go to your local module folder and run npm pack. It will create a .tgz file, which you should store in a local_modules folder in your app. Then install it with npm install .\local_modules\package-1.2.3.tgz.

You can now deploy your app with cf push APP-NAME and read the logs with cf logs APP-NAME --recent.

Power Automate: execute a SQL query via the On-Premises Data Gateway

In Power Automate, if you want to connect to a SQL Server through an On-Premises Data Gateway, you cannot use the “Execute a SQL Query” action because it’s reported as not currently supported.

There is a workaround with “Transform data using Power Query” (ATTENTION: you cannot open it from a flow inside a Solution… you’ll have to go to your Flows and edit the flow from there):

Let’s say we have 3 tables: ITEM_CATALOG, CATALOG and CURRENCY. We want to join them and filter them based on a variable found previously in our flow.

First, we define our WHERE clause. Here I have several values that I want to test using an IN:

I create a string with my different values separated by a comma.

Next, we can open the Power Query editor:

In the interface, we choose the 3 tables we need to merge and we add a parameter called “where”:

We rename it to “where” and leave the default settings:

Then we use the “Advanced Editor”:

And we write the following:

let
  where = Text.Split( "@{variables('where')}" , ",")
in
  where

It means we split the “where” variable coming from the flow on the comma separator:
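What the M expression does can be illustrated in plain JavaScript (the value "CAT1,CAT2,CAT3" is a made-up example of what variables('where') might contain):

```javascript
// Hypothetical flow variable: several values joined by commas.
const where = "CAT1,CAT2,CAT3";

// Same operation as Text.Split(..., ",") in the M code above:
// one string in, a list of values out, ready for the IN filter.
const values = where.split(",");

console.log(values); // → [ 'CAT1', 'CAT2', 'CAT3' ]
```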

We can now merge the tables and add a filter:

And in the filter step, we select “in” and our query:

The last step is to “Enable Load” to make sure this is what the operation returns to our flow:

You can run it to test and see if it works.

Then, to get the output from it, we’ll use a “Parse JSON”… The schema is probably something like:

{
    "type": "object",
    "properties": {
        "resultType": {
            "type": "string"
        },
        "value": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "COLUMN_A": {
                        "type": "string"
                    },
                    "COLUMN_B": {
                        "type": "integer"
                    },
                    "COLUMN_C": {
                        "type": "string"
                    }
                },
                "required": [
                    "COLUMN_A",
                    "COLUMN_B",
                    "COLUMN_C"
                ]
            }
        }
    }
}

You may need several attempts to find the correct schema. You can also use “Generate from sample” by pasting the data from the previous step:

We use “value” in the loop:

And then we can access our columns:
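To make the shape of the data concrete, here is a hypothetical payload matching the schema above, with the flow’s loop over value written out in plain JavaScript (column values are made up):

```javascript
// Hypothetical output of the "Transform data using Power Query" step,
// matching the Parse JSON schema above.
const result = {
  resultType: "Table",
  value: [
    { COLUMN_A: "foo", COLUMN_B: 1, COLUMN_C: "EUR" },
    { COLUMN_A: "bar", COLUMN_B: 2, COLUMN_C: "USD" },
  ],
};

// The loop over 'value' behaves like this, and each column
// is accessed by name inside it.
for (const row of result.value) {
  console.log(row.COLUMN_A, row.COLUMN_B, row.COLUMN_C);
}
```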

Pass a URL parameter to a SharePoint Online form’s field

The only way to pass a URL parameter to a SharePoint Online (modern design) form’s field is to use PowerApps (at least if you cannot add any JS to your website!).

Important warning: when you use PowerApps to manage your form, edits to the list settings won’t be reflected in the PowerApps form. For example, if you add a field, it won’t show up, and you’ll have to manually update the PowerApps form to add it (see the bottom of this article).

From the list view, go to Integrate then PowerApps and Customize forms:

Once PowerApps has opened the form, you need to do several things.

1. Load the ID

We first need to make sure the form will load the required item when we pass the ID URL parameter:

From the SharepointForm Advanced Settings, we change the DefaultMode to check if we have the ID parameter, and if we don’t have it, then it should be a New Form, otherwise an Edit Form:

If(IsBlank(Param("ID")), FormMode.New, FormMode.Edit)

From the SharepointForm Advanced Settings, we change the Item section to check if we have the ID parameter, and if we have it, then we do a lookup in our list to find the data:

If(IsBlank(Param("ID")), SharePointIntegration.Selected, LookUp('NAME OF THE LIST', ID = Value(Param("ID"))))

Add a SUBMIT button

With PowerApps, there is no button to save the changes! We’ll add a button to the form:

In the button’s properties, we change the onSelect to be:

SubmitForm(SharePointForm1)

Be aware that the page will stay on the form after clicking the button. You might want to close it using the Exit() function, but the user would then be redirected to office.com… I’d recommend using Launch() to redirect your users to a page:

SubmitForm(SharePointForm1); Launch("https://tenant.sharepoint.com/sites/MySite/");

Set field’s value based on URL parameter

We can finally set the field’s value based on the parameter in the URL. Select the INPUT zone of the field, and in the Default section use the formula below:

If(IsBlank(Param("Title")), ThisItem.Title, Param("Title"))

Here my field is called “Title” so I decided to use a parameter called “Title” as well.

Link to the form

We cannot use NewForm.aspx or EditForm.aspx to access this form; we need a special link instead.

Go to your list settings:

Then go to the form settings (it’s from there that you can decide to keep PowerApps or use the original Sharepoint Forms), and click on See versions and usage:

You’ll get the App Id from this page:

Next, you’ll use the App Id to forge the URL: https://apps.powerapps.com/play/providers/Microsoft.PowerApps/apps/APP_ID
With our example, the URL will be: https://apps.powerapps.com/play/providers/Microsoft.PowerApps/apps/c6f23ac1-dcbd-4daf-925e-2701ab241ca0

You can now pass the URL parameter: https://apps.powerapps.com/play/providers/Microsoft.PowerApps/apps/APP_ID?Title=Hello%20World
And an ID to retrieve an existing item: https://apps.powerapps.com/play/providers/Microsoft.PowerApps/apps/APP_ID?Title=Hello%20World&ID=2

How to use it with a LookUp column?

If you want to auto-select a LookUp field using a URL parameter, you need to do a few things…

First, we need to add the related table. To do so, click on Data in the left navigation bar and search for SharePoint:

Search for the table and add it.

Second (optional) step: click on the Lookup field in the form and change the Items to show a list of options; if there is no “Lookup” ID in the URL, we use the default list of options:

The formula below retrieves the “ID” and “Title” columns from the distant list, based on the “Lookup” parameter, and renames the result as {Id:"ID", Value:"Title"}:

If(IsBlank(Param("Lookup")), Choices([@'CURRENT LIST NAME'].COLUMN_NAME), RenameColumns(ShowColumns(Filter('DISTANT LIST NAME', ID = Value(Param("Lookup"))), "ID", "Title"), "ID", "Id", "Title", "Value"))

Third, click on the Lookup field in the form and change the DefaultSelectedItems to select the item from the list of options:

The formula below returns an empty selection with {Id:"", Value:""} when there is no URL parameter; otherwise it returns the first record matching our lookup:

If(IsBlank(Param("Lookup")), {Id:"", Value:""}, First(RenameColumns(ShowColumns(Filter('DISTANT LIST NAME', ID = Value(Param("Lookup"))), "ID", "Title"), "ID", "Id", "Title", "Value")))

And finally, we can pass Lookup=ID in the URL to select the related item in the other list.

How to deal with new fields?

If you add a new field to your list’s settings, you’ll have to edit the form in PowerApps, and then edit the fields and add the new one:

(I used this article as a starting point)

Transfer an Alexa AWS Lambda function from the online editor to the ASK CLI

When we follow the guide to build a new smarthome skill, it gives the steps to create a function in the online code editor.

But if you prefer to use the ASK CLI, there are a few steps to follow…

I first create a fake skill with ask new (using the “hello world” and “AWS Lambda” options).

Once the folder structure and files are created, I edit the .ask/ask-states.json file to reflect the information from the skill I created during the guide.

Then, in the skill-package folder, I remove everything except skill.json. To find what to put into that file, use the command ask smapi get-skill-manifest -s <SKILL ID> and copy/paste that code.

Finally, I force the deploy with ask deploy --ignore-hash.

The Lambda function can now be managed locally on your computer and deployed with ASK CLI. You can go to the different skill consoles to delete the fake skill “hello world” you created.

Add a domain to a Let’s Encrypt certificate

For Apache, in the sites-available folder, you need to create the the-new-one.your-domain.com.conf file. Then enable the new site with a2ensite the-new-one.your-domain.com.

You can list all domains associated with a certificate:

certbot certificates

Now we add the SSL certificate using certbot. You need to list all the existing domains and add the new one:

certbot --apache --cert-name your-domain.com -d first.your-domain.com,second.your-domain.com,third.your-domain.com,the-new-one.your-domain.com

Get the email address from an Azure DevOps “by” field in Power Apps Flow

If you need to get an email address from an Azure DevOps work item (e.g. from the “Changed By” field), it can be tricky in Power Apps Flow because the field returns “John Doe <john@doe.com>”.

To extract only the email from this string, use the expression below:

first(split(last(split([YOUR_FIELD],'<')),'>'))
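The same split logic, written out in Node.js to show what the expression does step by step:

```javascript
// Node.js illustration of the Power Automate expression above:
// first(split(last(split(field, '<')), '>'))
function extractEmail(changedBy) {
  // last(split(..., '<')): keep what follows the last '<'
  const afterBracket = changedBy.split("<").pop();
  // first(split(..., '>')): keep what precedes the first '>'
  return afterBracket.split(">")[0];
}

console.log(extractEmail("John Doe <john@doe.com>")); // → john@doe.com
```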

Use CTRL and TAB to switch between two tabs in Chrome

It’s super handy to be able to switch between two tabs in the web browser… But it’s tricky to set it up in Chrome!

  1. Install AutoControl: Keyboard shortcut, Mouse gesture
  2. Install the native component as the extension asks for
  3. Add a new action
  4. The trigger is LEFT CTRL and TAB
  5. The action is Switch to previous tab

It should now work to switch between tabs using LEFT CTRL + TAB.

Upgrade a Kimsufi (OVH) server from Debian 10 (Buster) to Debian 11 (Bullseye)

Remember to update your Kimsufi server regularly.

I will briefly explain the steps to follow.

  1. Check the known issues related to the upgrade.
  2. If possible, plan an SSH connection from 2 locations, because of an SSH issue during the installation.
  3. Update the installed packages with apt-get update && apt-get upgrade
  4. Back up the data:
    mkdir /root/svg_special; cp -R /var/lib/dpkg /root/svg_special/; cp /var/lib/apt/extended_states /root/svg_special/; dpkg --get-selections "*" > /root/svg_special/dpkg_get_selection; cp -R /etc /root/svg_special/etc
  5. It is then advisable to use screen so you can reconnect (with screen -r) in case of a disconnection:
    screen
  6. The upgrade process described on the Debian website is designed for “pure” systems without packages from other sources. For a more reliable upgrade process, you can remove those packages from the system before starting the upgrade:
    aptitude search '~i(!~ODebian)'
  7. You can optionally purge old obsolete packages. First list them, then purge them if everything looks good:
    aptitude search '~o'
    aptitude purge '~o'
  8. Run dpkg --audit to make sure everything is fine before the migration. You can also type dpkg --get-selections "*" | more and check that no package is on hold.
  9. Now replace every “buster” in /etc/apt/sources.list with “bullseye” (for example with sed -i 's/buster/bullseye/g' /etc/apt/sources.list)

    Also check the files located in /etc/apt/sources.list.d, updating for example the source for MariaDB.

    I also had to replace the line deb http://security.debian.org/ bullseye/updates main contrib non-free in my /etc/apt/sources.list file with deb http://security.debian.org/debian-security bullseye-security main contrib non-free.

  10. It is recommended to use the /usr/bin/script program to record a transcript of the upgrade session, so that if a problem occurs you have a record of what happened. To start a recording, type:
    script -t 2>~/upgrade-buster.time -a ~/upgrade-buster.script
  11. Now for the serious part, starting with refreshing the package lists:
    apt-get update
  12. Check that there is enough disk space (an explicit message appears otherwise):
    apt -o APT::Get::Trivial-Only=true full-upgrade
  13. Now perform a minimal upgrade:
    apt-get upgrade
  14. From there the system will ask you questions… in general, choose the default option if you don’t know what to answer.
  15. Then continue with
    apt full-upgrade

This last step will take a while. Once it is finished, you can reboot the server to make sure everything is fine.

It is worth checking that the current PHP version is correctly used by Apache and matches what you want. First, check the version with:

php -v

Then look at the PHP versions available among the Apache modules:

ls -l /etc/apache2/mods-available/php*

And look at the one that is enabled:

ls -l /etc/apache2/mods-enabled/php*

Also look in the modules folder to check which version is present:

ls -l /etc/apache2/modules/libphp*

If the desired version is missing from the modules, install it, for example for 7.4:

apt-get install php7.4 php7.4-mysql

Then make sure to enable the right version, for example when moving from v7.0 to v7.4:

a2dismod php7.0
a2enmod php7.4

And restart Apache:

systemctl restart apache2

Once the errors are fixed, clean up the packages with:

apt-get autoremove

Note: to stop screen, press CTRL + A then k.