Author: rr

  • Veeva integration

    Veeva integration was quite new to me, as I had only used it as an end user reviewing and approving documents. For this task I had to level up my Veeva knowledge: it was about managing users and “business roles” in Veeva Safety via the Web Services (REST API) connector in IdentityIQ.

    I understand that people are busy, but it surprised me that I was the only one who researched the Veeva documentation before the initial meetings with stakeholders and Veeva representatives.

    That was also the main cause of confusion in the first two meetings, with people talking about different objects in different systems. Luckily, the documentation had given me a good head start and enough of an overview to mentally design the solution, and to judge whether it was even possible.

    We decided on a PoC to test the basic functionality and to make the web services connector a base for other applications.

    Veeva’s API docs were quite sufficient to get onboarded, but there was (and is) an issue with the “User Role Setup” object. It does not fit an IGA model, as it is created on the fly. Obviously Veeva does not have the same security model as Active Directory, but objects generated on the fly would require additional customizations in IdentityIQ.

    After several meetings, my colleague and I were able to propose a few solutions: either full customization of the Veeva logic in IdentityIQ, or Veeva creating a “Business Role” object so that we would manage only memberships.

    To my surprise, Veeva stepped up and created an object that fits their security model and UPS solution, and it also simplified how much of the logic we would have to manage: none.

    In parallel, while Veeva was developing their “Business Role” solution, I created and tested the web services connector and built the basic PoC with user Create/Update/Enable/Disable operations plus assigning and removing entitlements. That also included account and group aggregation.

    The best thing I did was analyze and debug the objects passed before and after the REST endpoint calls. That gave me a strong foundation and a clear picture of what actually happens during the web services operations. SailPoint’s documentation was OK, but it was missing a lot of information on how to do things, although the example rules it includes were quite useful.

    In general, I would recommend to anybody setting up their first web services connector: go through the documentation, then debug and inspect the objects before and after operations. That gives you a full overview and lets you make the necessary adjustments for the custom connector details.

    What SailPoint provides is the framework; the JSON/XML attributes in request bodies and responses you have to handle and set up yourself. You may well need to implement your own authentication; at least we had to for Veeva, since it uses a sessionId.
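
    For the sessionId-style authentication, a custom rule (for us, a Web Services Before Operation Rule) can inject the session token into each request header. Below is a minimal sketch of the idea, not our production rule: obtaining the sessionId in the first place is a separate login call with credentials against Veeva’s auth endpoint, and caching it in an application attribute is just an illustrative assumption.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: attach a previously obtained Veeva sessionId to the request.
    // Storing it in an application attribute named "sessionId" is an assumption.
    String sessionId = (String) application.getAttributeValue("sessionId");

    Map headers = requestEndPoint.getHeader();
    if (headers == null) {
        headers = new HashMap();
    }
    headers.put("Authorization", sessionId); // Veeva accepts the session token here
    requestEndPoint.setHeader(headers);

    return requestEndPoint;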

    Here is a debug logger you can use as a good starting point when developing. It is written for a Web Services Before Operation Rule, where requestEndPoint, application and provisioningPlan are all available.

    import java.util.List;

    import sailpoint.object.Attributes;
    import sailpoint.tools.Util;

    if (log.isDebugEnabled()) {
        // Request details: body, full URL and response mapping
        log.debug("BODYDEBUG" + requestEndPoint.getBody());
        log.debug(requestEndPoint.getFullUrl());
        log.debug(requestEndPoint.getResMappingObj());

        // Application attributes as configured on the connector
        Attributes attrs = application.getAttributes();
        if (attrs != null) {
            for (String key : attrs.getKeys()) {
                log.debug(key + " " + attrs.getString(key));
            }
        } else {
            log.debug("No attributes available");
        }

        log.debug("Operation: " + requestEndPoint.getOperationType());

        // Provisioning plan: dump the whole plan, then each attribute request
        if (provisioningPlan != null) {
            log.debug(provisioningPlan.toXml());

            List accountRequests = provisioningPlan.getAccountRequests();
            for (accountRequest : Util.safeIterable(accountRequests)) {
                List attributeRequests = accountRequest.getAttributeRequests();
                if (null == attributeRequests) {
                    continue; // nothing to log for this request, check the next one
                }
                for (attrReq : Util.safeIterable(attributeRequests)) {
                    if (null != attrReq) {
                        log.debug(attrReq.getName() + " " + attrReq.getValue());
                    }
                }
            }
        } else {
            log.debug("No provisioningPlan");
        }
    }
  • AD Account with mailbox and shared mailbox with O365 E1 license

    This story is about converting a manual (and partly scripted) onboarding process for service accounts with mailboxes/O365 licenses into an automated one.

    How do you deal with a hybrid setup with AD, EntraID and Exchange?
    AD and EntraID are largely handled automatically by AD Connect/Sync, which moves attributes both ways (mostly AD to EntraID). Exchange/O365 was the extra bit for me, as I had not been deeply involved with it yet, although I had previously dealt with a crazy bug (a memory buffer issue with Write-Output) when calling PowerShell scripts from Python via WinRM. So yes, I knew that PowerShell commands are involved in creating and setting up Exchange mailboxes.

    How did I approach the task?

    My brain works in a way that I need to understand the topic and have certain questions answered before I feel comfortable and can find the best solution. This task sounded fairly straightforward: convert the current manual approach into a more automated one in IdentityIQ. So I started with the current process.

    First I went into a deeper analysis to understand:

    • What is the current setup
    • What is missing from the current setup, and are there issues?

    I had several meetings with the different people involved to find out about the tools and the current steps. In addition, I got access to the tools repo and confirmed the functionality. It took a while, but I was able to map the process to several steps.

    The tools covered these areas (not ordered):

    • Create AD user
    • Create AD groups
    • Assign user to a group
    • PowerShell commands for on-prem Exchange (Exchange)
    • PowerShell commands for Exchange Online (EXO)
    • Assign distribution group and owner
    • Onboarding to IdentityIQ and assigning owner

    Some new questions came to the surface, like: are these steps still up to date, and what are the real end-user use cases? The first one seemed OK to answer on my own, but the second was not fully clear. There were also minor issues, plus an additional finding that many service accounts did not have an owner/responsible person to charge department costs to.

    What is the correct order for a hybrid setup?

    • Create AD account
      • With email attribute, UPN and so on
    • PowerShell command Enable-RemoteMailbox
      • Wait for the AD account to be created (a short wait if the same DC is used)
      • Adds proxyAddresses, targetAddress and other Exchange attributes
    • AD Sync to EntraID
      • Creates a user in EntraID
      • Creates a user, but not a mailbox, in EXO (if the attributes are correct)
    • Assign E5 license
      • Creates the EXO mailbox
    • PowerShell command Add-MailboxPermission
      • Assigns Full Access/Read permissions
      • Wait for the EXO mailbox to be created
    • PowerShell command Add-RecipientPermission (Send As)
      • Assigns the Send As permission
      • Wait for the EXO mailbox to be created (see the wait-and-retry sketch below)
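
    Several of these steps only succeed once an earlier, asynchronous step has finished (the AD sync, the license-driven mailbox creation). Below is a minimal Beanshell-style sketch of the wait-and-retry idea behind the permission steps; checkMailboxExists() and addMailboxPermissions() are hypothetical helpers standing in for whatever mechanism actually runs the Exchange commands (e.g. IQService/PowerShell scripts), not real APIs.

    // Sketch only: poll until the EXO mailbox exists, then assign permissions.
    // Both helpers below are hypothetical placeholders, not IIQ or Exchange APIs.
    // Assume 'upn' already holds the service account's userPrincipalName.
    int maxRetries = 10;
    boolean mailboxReady = false;

    for (int i = 0; i < maxRetries && !mailboxReady; i++) {
        mailboxReady = checkMailboxExists(upn); // e.g. wraps Get-EXOMailbox
        if (!mailboxReady) {
            Thread.sleep(60000L); // back off for a minute between checks
        }
    }

    if (mailboxReady) {
        addMailboxPermissions(upn); // e.g. wraps Add-MailboxPermission / Add-RecipientPermission
    } else {
        log.error("EXO mailbox for " + upn + " not ready after " + maxRetries + " retries");
    }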

    After finding out the correct steps, I had to think about how that would work in IdentityIQ. There were a few challenges to clarify in order to design the solution properly, with some additional side effects. In reality there were even more variants, e.g. a mailbox without an O365 license, some users limited to on-prem Exchange only, and also the possibility to change the mailbox type/license.

    1. Refine and define requirements
    2. Distribution group setup
    3. Run the commands in order with wait times
    4. Inform the requestor when last step succeeds
    5. Who will support the issues/errors
    6. Dependency on external scripts
    7. Cooperation with AD scripts
    8. Future – onboarding more powershell scripts?

    There were many things to consider, and I made estimations to move forward.

  • Password page takes a long time to load

    I got assigned a task about two pages taking too long to load. The first one was a custom form with password reset functionality (solved below). The second one was poorly described and concerned some certifications taking a long time to load (moved to the next post).

    Password reset page

    Let’s start with the custom page for password reset. Some basic info: different roles have access to different users and accounts, e.g. manager, user administrator, or basic user with admin accounts.

    The workflow did not contain extensive QueryOptions or context searches, but the Form did have a quite large QueryOptions object with quite a few filters in AND and OR clauses.
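
    For context, a field script in such a Form builds something along these lines. This is a simplified sketch of the shape of the query only; the attribute names and values are illustrative, not the actual filters:

    import sailpoint.object.Filter;
    import sailpoint.object.Link;
    import sailpoint.object.QueryOptions;

    // Simplified shape: (application = X) AND (owned by me OR of admin type)
    QueryOptions qo = new QueryOptions();
    qo.addFilter(Filter.and(
        Filter.eq("application.name", "Active Directory"),
        Filter.or(
            Filter.eq("identity.name", identity.getName()),
            Filter.eq("type", "admin"))));

    return (context.countObjects(Link.class, qo) > 0);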

    My first approach, and a funny, silly debug mistake.

    This initial step was to confirm that the slow performance was caused by the custom code and not by OOTB IIQ.

    This line is for a future retrospective, in case it ever changes: so far I have not found a better way to debug XML objects in IIQ than writing “console output messages” into a log file.

    These are the last lines of the first script section.

    qo.addFilters(filters);
    return (context.countObjects(Link.class, qo) > 0);

    I added log lines around them in the Form object’s script fields, like this:

    log.error("script identity start")
    ...
    qo.addFilters(filters);
    log.error("script identity end");
    return (context.countObjects(Links.class,qo) > 0);

    The second script, for the hidden variable:

    log.error("script hidden start")
    ...
    qo.addFilters(filters);
    log.error("script hidden end");
    return (context.countObjects(Links.class,qo) > 0);

    Now I open the password reset page and get a bit confused by the output and the timings between the steps in the log file.

    It shows some 2 seconds between the two scripts, rather than between the start/end messages of one script (not the exact log messages):

    script identity end – 14:03:14
    script hidden start – 14:03:16

    A bit puzzled, but after a while I could see it clearly and smiled at how silly I had been 🙂 The countObjects call sits in the return line, so it runs after the “end” message has already been logged.

    I corrected the debug messages and confirmed that the delay comes from the countObjects DB search in both scripts:

    log.error("script hidden start")
    ...
    qo.addFilters(filters);
    log.error("script hidden end1");
    int objs = context.countObjects(Links.class,qo);
    log.error("script hidden end2");
    return (objs  > 0);

    Moving down from TEST to my DEV environment

    The spadmin in my own DEV is set up without additional roles or anything, so the admin user cannot reset any passwords with the custom functionality. That turned out to be a good thing, because the query takes around 26 seconds and returns 0 records.

    Assumptions:
    • It must scan the whole table to find nothing. (correct)
    • It must be due to a missing index. (wrong)
    • It should be easy to locate the DB query with trace enabled. (correct)

    I used the dev tools in Edge to check the timings of the API call. I enabled trace on all objects in log4j2, and soon I was able to see the two queries.
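
    For reference, SQL statement logging can be switched on in log4j2.properties by raising the Hibernate SQL logger to trace, or, the blunt variant, tracing everything via the root logger (the logger key name below is arbitrary):

    # show the generated SQL statements
    logger.hibernatesql.name = org.hibernate.SQL
    logger.hibernatesql.level = trace
    # or trace everything (very noisy; only on an idle DEV system)
    rootLogger.level = trace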

    It was a combined select query with one inner join and 3 left joins on the Identity table. I am not a database expert, but from the live query statistics in MSSQL Management Studio I could see the work growing steeply with each additional left join.

    I removed the left joins and the related OR clauses, leaving one inner and one left join. Suddenly it returned results (still 0) within a second. I was on the right path.

    I reviewed the code for the QueryOptions and filters. Logically it could be split, so I split it, with some extra optimizing, and added comments for my colleagues explaining why it is split into similar QueryOptions and context.countObjects calls.
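
    As a simplified sketch of the change (the real filters differ): instead of one query whose OR branches forced the extra left joins, run two cheaper counts and OR the results in code, short-circuiting on the first hit.

    import sailpoint.object.Filter;
    import sailpoint.object.Link;
    import sailpoint.object.QueryOptions;

    // Illustrative filter paths only; the real ones come from the Form's logic.
    QueryOptions qoOwned = new QueryOptions();
    qoOwned.addFilter(Filter.eq("identity.name", identity.getName()));
    if (context.countObjects(Link.class, qoOwned) > 0) {
        return true; // first branch matched; no need to run the second query
    }

    QueryOptions qoManaged = new QueryOptions();
    qoManaged.addFilter(Filter.eq("identity.manager.name", identity.getName()));
    return (context.countObjects(Link.class, qoManaged) > 0);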

    Now it runs on my DEV within a second. I retested and confirmed the use cases for the different user types, then deployed to TEST: from 4 seconds to under 1 second (success). Finally I created a PR; now we are waiting for the next release to save 8 seconds in Production on each password reset.

    How did I debug in this case?

    • used error-level log messages to find/confirm the problematic code
    • set up the DEV environment to not run any tasks, which allows me to enable trace on all objects and classes used by IIQ when needed
    • read the trace logs for SQL queries and timings



  • IdentityIQ upgrade 8.4p2 and unrelated performance issues

    Written as a story; if you want to skip ahead, scroll down to the Performance issue section, or all the way to the Solution.

    I worked for 5 months on an IdentityIQ upgrade project as technical lead. It went OK and was delivered on time. I could not have done it without the team’s help and everyone coming together for the regression testing and finalizing tasks.

    The upgrade weekend went through with some minor issues. It was long, and there were minor mistakes too; I blame the rushed last two weeks with many tasks and many people involved. Either way, an overall success. Very happy and tired.

    The next day there was a minor performance issue, probably the system catching up on tasks after being off for 2 days. By Tuesday it was not so minor anymore. I was trying to get involved but was refused; in one way a good decision, as the cause might have been something other than the upgrade, and that had to be confirmed before I joined. I was eventually called in to help a week later, taking over from my colleague, who could neither confirm nor deny the upgrade as the cause.

    Performance issue

    The issue was overall slowness affecting everything and everyone. It was not an isolated part of the application, like a specific task or GUI section. It was everything: UI, batch, and all operations, with spikes.

    What we knew had happened, chronologically:

    • IIQ upgrade (weekend)
    • Minor performance issues (Monday)
    • Major performance issues (Wednesday)
    • Collapse (Friday)
    • SQL Server patch (weekend)

    We had not upgraded Java or Tomcat, so yes, you are guessing right: it was either the upgraded IIQ OOTB functionality, our custom code, or the database. Right?

    For the first day we were jumping between IIQ and the DB, trying to debug different things. A bit of blaming, and also clarifying possible causes, or more specifically excluding what it could not have been.

    Our first conclusion with the DB team was that we needed to rebuild the indexes, as that had somehow helped before after a lot of table updates (not really my idea). It helped a bit for a day. Then we were hit again.

    I think at that point we were able to narrow it down to the database, because the queries were spiking not only from the IIQ servers but also when run directly on the DB. Therefore it could not have been the IIQ servers causing the performance hit. That was a good conclusion.

    The only trouble was that CPU utilization was maybe 30-50%, and RAM was at 90% but managed by SQL Server by design. There were no visible spikes. That went on for another day, testing some random queries and checking whether the Perform Maintenance tasks were the cause, slowing everything else down.

    The next day I had the insight that it could be disk IO utilization, something we had received no information about until then. Once provided, we could see it: utilization was at 100% almost all the time. The spikes were visible in the graph and matched the behaviour we were seeing. Finally moving somewhere; my first win.

    We were still trying to figure out what was causing this. To be honest I did not know, as I am not a database expert, but I knew for sure that something was wrong with the disk IO. Being tired of taking the blame that it was due to the upgrade, I requested disk IO statistics for one month to compare how it really was before the IIQ upgrade.

    Solution

    And there it was. I was angry, I was happy, I was cursing, I wanted to share it and kick some ass.

    The disk IO statistics report, provided as an image, looked OK, but there was a pattern suggesting a disk IO upper limit: a horizontal line on the graph at 100% utilization since around the upgrade date.

    At first glance it looked like it started with the upgrade, but when I zoomed in and inspected the blurry image, it turned out the limit had been applied a day or two before the upgrade. Yes, another win.

    I raised the question with the DB/storage team, and of course there had been a change setting an upper limit for our service, marked as a low-impact, low-risk change 🙂 I could not believe it. The guy was very unprofessional and kept blaming us, but in the end reverted it (a good push, and good communication-history proof from my colleague; I did not have that). I was pushing for the change to be reverted due to the incident.

    Once it was reverted and the upper limit lifted, everything went back to normal and we could finally leave for the weekend.

    Good approach

    Push the other teams or people to provide proof or gather more information

    Go for your hunch, but verify with proof or confirm logically

    Be the one to take the lead or stand up when needed

    Do a single change at a time to find the root cause

    Lessons learned

    Check for changes sooner when large incidents occur (it could have been found sooner)

    Take a moment and brainstorm a few theories at the beginning (this improves the chance of finding the right path from the start)

    Avoid pushing your opinion without any proof (a hunch is OK, but be reasonable)

  • PowerShell overview and basics

    Run with PowerShell:

    powershell.exe -ExecutionPolicy Bypass -Command "& 'D:\powershell.ps1'"

    How long does a script take?

    [datetime]$startDate = Get-Date
    # ... the work you want to measure ...
    [datetime]$endDate = Get-Date
    Write-Host $(New-TimeSpan -Start $startDate -End $endDate)

    Find users with duplicate employeeID values:

    Get-ADUser -Filter { employeeID -like "*" } -SearchBase "OU=yourOU,DC=domain,DC=com" -Properties employeeID | Group-Object employeeID | Where-Object { $_.Count -ge 2 } | Select-Object -ExpandProperty Group | Select-Object Name, UserPrincipalName, SamAccountName, employeeID

  • Python

    Installation

    sudo apt install python3.10
    sudo apt install python3-pip
    pip install --user pipenv
    python -m site --user-base
    export PATH="$PATH:/pathAbove/bin"   # /pathAbove = output of the command above
    pipenv install
    pipenv install flask
    

    Data structures

    Tuple

    Cannot be changed after it is created – immutable.

    thistuple = ("1", "2", "3")
    for x in thistuple:
      print(x)

    List

    Can be changed after it is created – mutable.

    thislist = ["1", "2", "3"]
    for x in thislist:
      print(x)

  • Databases

    Postgres

    sudo apt install postgresql
    sudo service postgresql status
    sudo passwd postgres
    sudo -u postgres psql

    create database dbname;
    create user username with encrypted password 'password';
    grant all privileges on database dbname to username;

    Useful commands

    ALTER TABLE tablename ADD columnname datatype NOT NULL;

    \l  list databases
    \c  connect to a database
    \dt list tables

  • Move a WordPress site to a new server

    Backup

    mysqldump -u root -p database1 > database.sql

    tar -zcvf data.tar.gz data

    Restore

    CREATE DATABASE database1;
    CREATE USER 'user'@'localhost' IDENTIFIED BY 'pwd';
    GRANT ALL PRIVILEGES ON database1.* TO 'user'@'localhost';
    FLUSH PRIVILEGES;

    mysql -u root -p database1 < database.sql

    tar -zxvf data.tar.gz
    New website:

    curl https://wordpress.org/latest.tar.gz | sudo -u www-data tar zx -C /srv/www

    Update wp-config.php with the new database password.

    sudo chown -R www-data:www-data /var/www/data
    find /var/www/data -type d -exec chmod 755 {} \;
    find /var/www/data -type f -exec chmod 644 {} \;

    Server setup

    Reverse nginx proxy settings

    HTTP Apache2 settings

  • Ubuntu basic commands

    Update packages/apps

    sudo apt update && sudo apt upgrade

    Compress/Decompress a file

    tar -zcvf compressedFileName.tar.gz directoryName

    tar -zxvf compressedFileName.tar.gz

    WSL: use an ssh key on a remote server:
    eval $(ssh-agent -s)
    ssh-add -t 60m ~/.ssh/id_rsa
    ssh -A user@address

    MySQL

    sudo apt install mysql-server
    sudo mysql_secure_installation
    sudo /etc/init.d/mysql start
    sudo service mysql restart

    For WSL automatic start after restart:

    sudo update-rc.d mysql defaults

    SSH

    adduser roman
    usermod -aG sudo roman
    su - roman
    mkdir ~/.ssh && chmod 700 ~/.ssh
    touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
    nano ~/.ssh/authorized_keys
    sudo ufw app list
    sudo ufw allow OpenSSH
    sudo ufw enable
    sudo ufw status
    sudo apt update
    sudo apt upgrade
    sudo reboot

    nginx

    sudo nano /etc/nginx/nginx.conf
    uncomment server_tokens off; (hides the nginx version in responses)
  • Win10 tools and settings for working with Ubuntu

    I spent the last few years working in Ubuntu and Linux Mint. Now I have a new computer and will set it up with Win10 and WSL2.

    WSL settings and setup

    Turn Windows features on or off:

    • Telnet client
    • Virtual Machine Platform
    • Windows Subsystem for Linux

    Powershell:

    wsl --set-default-version 2

    Tools and apps

    • Ubuntu – install from the Microsoft Store
    • Windows Terminal – install from the Microsoft Store
    • Docker Desktop – needed for running Docker in the Linux subsystem
    • DBeaver (dbeaver.io) – for accessing databases
    • Visual Studio Code

    Ubuntu settings

    ssh key

    Windows Terminal -> Settings -> starting directory: /mnt/d/

    Trouble with accessing the D: drive as a regular user:
    create or edit /etc/wsl.conf and change the default username to yours:

    [user]
    default=roman