Java: Simple AES Encryption Working Example

After moving on from Visual Studio, these days I am developing more dynamic web applications in Java, and once again I needed to store passwords securely. After a long search I found the following piece of code, which can be used to encrypt/decrypt sensitive data using a private key (a passphrase hashed into an AES key). The code uses the Advanced Encryption Standard (AES), which is a symmetric algorithm.

The code uses only the standard Java security classes and does not need any third-party Base64 jar for encoding and decoding passwords; java.util.Base64 (available since Java 8) handles that.


AES Class 


import java.util.Arrays;
import java.util.Base64;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class AESEncryption {

    private static byte[] key;
    private static SecretKeySpec secretKeySpec;

    // Derive a 128-bit AES key from the passphrase: SHA-1 hash truncated to 16 bytes
    public static void setKey(String inputKeyValue) {
        try {
            key = inputKeyValue.getBytes("UTF-8");
            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            key = sha.digest(key);
            key = Arrays.copyOf(key, 16);
            secretKeySpec = new SecretKeySpec(key, "AES");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static String encrypt(String strToEncrypt, String secretKeyValue) {
        try {
            setKey(secretKeyValue);
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, secretKeySpec);
            // Encrypt the UTF-8 bytes and Base64-encode the result for safe storage
            return Base64.getEncoder().encodeToString(cipher.doFinal(strToEncrypt.getBytes("UTF-8")));
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    public static String decrypt(String strToDecrypt, String secretKeyValue) {
        try {
            setKey(secretKeyValue);
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, secretKeySpec);
            // Base64-decode, then decrypt back to the original UTF-8 string
            return new String(cipher.doFinal(Base64.getDecoder().decode(strToDecrypt)), "UTF-8");
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}


Main Class for Testing the Code


public class AESTest {

    public static void main(String[] args) {
        // private key (passphrase used to derive the AES key)
        final String privateKeyVal = "abcdEFGHijklmnOPqrstuvwxyz";

        String originalText = "thisisMainText";
        String encryptedData = AESEncryption.encrypt(originalText, privateKeyVal);
        String decryptedData = AESEncryption.decrypt(encryptedData, privateKeyVal);

        // printing all variable data
        System.out.println(originalText);
        System.out.println(encryptedData);
        System.out.println(decryptedData);
    }
}

 

In the above example, the passphrase is hashed down to a fixed 128-bit AES key, so a longer, more random private key value mainly adds entropy to that key. Note also that ECB mode encrypts identical inputs to identical ciphertext, so for production use a mode with an IV (such as CBC or GCM) is generally preferred. And don't forget to keep the private key value isolated. This can be achieved by storing the private key in a read-only file on the server side that only the application user can access.
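As a minimal sketch of that isolation step (the file path and class name here are hypothetical), the key can be loaded from a protected file instead of being hard-coded:

import java.nio.file.Files;
import java.nio.file.Paths;

public class KeyLoader {

    // Hypothetical key file location; make it readable only by the application user
    private static final String KEY_FILE = "/opt/myapp/conf/aes.key";

    public static String loadKey() throws Exception {
        // Read the whole file as UTF-8 and trim any trailing newline
        return new String(Files.readAllBytes(Paths.get(KEY_FILE)), "UTF-8").trim();
    }
}

The encrypt call then becomes AESEncryption.encrypt(originalText, KeyLoader.loadKey()).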
Thanks.

Splunk: Using Dynamic Panels

In most organizations, dashboards come with a big list of panels, so a single-page dashboard quickly turns into a multi-page report. To avoid this, Splunk provides an option to create dynamic panels, meaning we can hide a panel from the dashboard when its search query is not returning any results.

E.g. the picture below shows one of the critical dashboards with multiple panels, leaving the support team to ensure all panels are looked after.

[Screenshot: dashboard with all panels displayed]

The dashboard below is the same dashboard as above, but with dynamic panels: the output is not shown when a search returns 0 records. This helps the support team monitor effectively with the right amount of data.

[Screenshot: the same dashboard with empty panels hidden]

We just need to make the following changes in the dashboard XML to make panel visibility dynamic.

  • Link the panel to a unique condition token:

<panel depends="$cond_token_a$">
  <search>
    <query> ----- </query>
    <done>
      <condition match="'job.resultCount' &gt; 0">
        <set token="cond_token_a">true</set>
      </condition>
      <condition>
        <unset token="cond_token_a"></unset>
      </condition>
    </done>
  </search>
</panel>

In the above example, I am setting the token to true when the search returns at least one row. We can match against any static value or provide additional filtering in the search string as well.

The code remains the same for any dashboard; we just need to give each dynamic panel a unique token id.

Thanks.

Splunk: Using Drilldown to Connect Two Dashboards

Splunk provides an option to connect two dashboards using drilldown: when we click on any dashboard output, it opens the linked dashboard rather than the underlying search. We can use drilldown to map to a specific dashboard, or take input from one dashboard and pass it as variables to the next one.

  • Using a static link: To create the drilldown mapping for any given panel on the dashboard, we simply add the following piece of code, replacing the mapped dashboard address.

Setting the drilldown option to all for generic cases:

<option name="drilldown">all</option>

For charts:

<option name="charting.drilldown">all</option>

For tables:

<option name="drilldown">cell</option>

<drilldown>
  <link> /app/APP_NAME/Dashboard_Name </link>
</drilldown>

  • Using fixed tokens:

<option name="drilldown">all</option>

<drilldown>
  <link> /app/APP_NAME/Dashboard_Name?form.tkn_Time.earliest=-60m@m&form.tkn_Time.latest=@m </link>
</drilldown>

Here @m snaps the time to minute boundaries: the earliest time starts at the beginning of the minute one hour ago, and the latest time ends at the most recently completed minute.

  • Using dynamic field values: We can also use data published on the current dashboard as input for another dashboard. For example, suppose one dashboard provides a summary of orders placed at a given time and another provides the details of an individual order by order id. Using drilldown we can link these two dashboards, allowing users to reach any order's data just by clicking its order id rather than searching and opening two dashboards every time.

 

<drilldown target="_blank">
  <condition field="ORDERID">
    <set token="src_token">$row.ORDERID$</set>
    <link>
      <![CDATA[
        /app/APP_NAME/Dashboard_Name?form.tkn_Time.earliest=$tkn_Time.earliest$&form.tkn_Time.latest=$tkn_Time.latest$&form.tkn_searchID=$src_token$
      ]]>
    </link>
  </condition>
</drilldown>

In the above examples, the token values can be changed as per your requirement.

Digital Security: Understanding Ransomware

A decade ago the word ransom was linked to kidnapping, mainly for the purpose of extracting a big chunk of money. Today the equation has changed, as it is no longer limited to human beings. Even in the early years of computers, the common destructive approach was a virus infection destroying important data and information.

Personal data and information are very important and valuable, and as we move toward digital technologies we are becoming more dependent on them. They can include pictures, videos, banking/financial information, passwords, certificates and much more. Nowadays cyber criminals have come up with a new approach: ransomware tools that people download and unknowingly execute, resulting in all their personal information being encrypted. Most of the time these files look like important update files, PDF documents, or sometimes just a simple HTML page. Windows machines are the most popular targets for these ransomware tools; roughly 90% of them are built only for Windows environments.

Ransomware tools mainly come in two categories:

Nondestructive: this kind of ransomware does not destroy or encrypt your personal information, but creates the impression that your personal information is infected with a virus and needs an advanced cleanup. Most of the time this is achieved with popups or full-screen messages telling the end user to contact some number, then requesting remote access or money for the cleanup. One example of such a message is shown below.

[Screenshot: hoax police warning demanding payment]

Destructive: these are the ones which are dangerous and can encrypt all data present on your machine within seconds of execution. Most of the time, executing such a file gives an immediate error, creating the impression that nothing has happened. But within seconds after that, we get a message on the screen telling us that all our personal information is now encrypted, and the only way to recover it is by paying a fraction of a bitcoin, equivalent to roughly 100 to 300 dollars. And yes, this message comes with a deadline clock ranging from 1 to 4 days. One such example is shown below. For most of this ransomware there is still no decryption solution other than paying money to the ransomware's owner.

[Screenshot: CTB-Locker ransom message with countdown timer]

How can we save our information from ransomware, then?

  • Always keep one or more backups of your critical personal information, pictures and other files.
  • Do not open any emails or attachments which you are not expecting to receive. Free lottery/iPhone emails are 100% hoaxes; no company is rich enough to give away its money or gadgets for free. So if we receive any email with attachments that we are not expecting, it is better to just delete it.
  • Always keep anti-virus software updated. Most antivirus products still cannot give 100% protection against all ransomware, so it is better not to run any unknown attachment on the computer.
  • Finally, if we are still curious to find out what is inside an attachment/document we downloaded or received, first open the https://www.virustotal.com/ website, shown in the screen below. We can upload the file to this website first and scan it to get a detection score. The website runs the file against most of the available antivirus engines and reports how many of them flag it.

[Screenshot: VirusTotal upload page and scan results]

If the results all come back green, the file is most likely safe to execute. If we get even a single hit from any antivirus engine, it is better not to run that file.

In summary, it is the individual's responsibility to protect his/her personal information. Always take backups, use strong passwords, and never open an unknown file on your machine that could leave you in regret afterward.

THANK YOU.


Linux: Grep: Searching in Files

grep is one of the most efficient utilities provided by UNIX for searching inside files. We can also use grep to search inside zipped files without extracting them, or to search the output of an earlier command. The key features of the grep command are explained below.

The basic usage of grep is to provide your search string inside single ('') or double ("") quotes, followed by the path of the file or files in which that string should be searched. The following are examples of the grep command and its various input flags. The usage of grep is almost the same across UNIX/Linux/AIX or any other environment.

  • grep error *.log

The above command will search for the word "error" inside all files ending with the extension .log in the current directory and display the rows containing this word. If we want to search inside all files in the current location, we can simply give "*" instead of "*.log". This command does not consider subdirectories or zipped files.

  • grep "error message" *.log
  • grep error message *.log

The first command will search for the string "error message" inside all files with the extension *.log. If we execute the command without double quotes, as in the second form, the OS will treat "message" as a file name and try to search inside that file if it exists, while still searching for "error" inside the *.log files.

  • grep -i error *.log

This is the case-insensitive option. Every grep search is case sensitive by default, meaning error, ERROR, and Error are treated as separate strings. With the "-i" flag we tell grep to ignore case while running the basic search.

  • grep '^error' *.log

This option returns rows only if they begin with "error", not when the word appears in the middle or at the end of the line. The caret "^" is used when we need to find strings beginning with a specific text/pattern.

  • grep 'error$' *.log

Similarly, the dollar sign "$" finds lines inside the files that end with the given text and returns the matching output.

  • grep -v error *.log

The above flag returns the rows which do not match the given input. With large file sets we should therefore use this flag carefully, otherwise it will print everything on the screen, or write it all to a file if the output is redirected, e.g. "grep -v error *.log > /tmp/output.log".

  • grep -f input.txt *.log

We can also provide multiline input, i.e. multiple search strings, in a file; grep treats every row of the file as a separate search string. Such input files are sometimes used to filter known error messages out of error logs, e.g. "grep error *.log | grep -f input.txt" or "grep -vf input.txt *.log | grep error".

We can also add the "x" flag (grep -xf input.txt *.log), which matches whole lines only. E.g. if one of the records in the input file contains "error", it will match only rows consisting of exactly the string "error"; it will not return rows that contain "error" together with other text, such as "error message" or "file exception error".

  • grep -n error *.log

With the "-n" option we get the row number of every matched record in the given files.

As explained earlier, we can combine any of these flags to meet the search criteria.

  • grep -E "err|exception|warn" *.log
  • egrep "err|exception|warn" *.log

With the "-E" (extended regular expressions) option we can give multiple search strings in the same command, with "|" (pipe) as the delimiter separating the inputs; egrep is a shorthand for the same thing. Generally this type of search is a bit more expensive from a performance perspective than a fixed-string search.

  • grep -F error *.log
  • fgrep error *.log

Whenever we want to search for fixed strings (no regular expressions) inside one or multiple files, we should use the "-F" (upper case) option, as it returns output faster compared to a standard grep. This is also referred to as fgrep.

  • grep -v '^$' output.log

The above command helps to remove empty rows from the given input file.

  • grep -r error /usr/input

The above is the recursive flag, which means grep will search for the given string inside the given directory and all of its subdirectories, if any exist. This can impact performance if there are too many subdirectories with big files.

The normal grep command does not search inside zipped files, and as a workaround some people unzip such files and then search the unzipped copies. Instead, we can use zgrep to search inside gzipped files directly, as below.

  • zgrep error *.log.gz

With all of the above commands, we can always pipe the output into the "more" command to view the matched output page by page, e.g. "grep -i error *.log | more".

To summarise, grep is one of the commands most used by UNIX users, and we sometimes end up with longer commands or big scripts simply through unawareness of its full functionality. There are more flags available with grep, but I have tried to list some of the key flags which are needed most often.

Thanks.

Oracle: Understanding REDO

In Oracle, the REDO concept is not the same as the UNDO functionality: unlike Microsoft Word, we cannot simply redo an operation after undoing it. The scope of REDO is limited to recovery and to keeping standby databases in sync. REDO logs do not hold table data as such; they record the changes made by users' DML operations. Each change is mapped to an SCN (System Change Number), a unique number that helps Oracle identify a specific REDO operation. REDO entries are initially stored in memory (the redo log buffer, part of the SGA), and whenever a user issues a commit, Oracle flushes all entries since the last SCN change to disk. This flush also takes place every 3 seconds and whenever the redo log buffer is one third full. It is performed by the Log Writer (LGWR) process running in the background.
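To see some of these pieces in practice, the sketch below lists the redo log groups with their sequence numbers, status, and the first SCN of each group. It is a minimal sketch only: the connection details are placeholders, it assumes the Oracle JDBC driver is on the classpath, and the user needs SELECT privilege on the V$LOG view.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RedoLogInfo {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own host/service and credentials
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
             Statement stmt = conn.createStatement();
             // V$LOG exposes one row per redo log group
             ResultSet rs = stmt.executeQuery(
                     "SELECT GROUP#, SEQUENCE#, STATUS, FIRST_CHANGE# FROM V$LOG")) {
            while (rs.next()) {
                System.out.println("group=" + rs.getInt("GROUP#")
                        + " sequence=" + rs.getLong("SEQUENCE#")
                        + " status=" + rs.getString("STATUS")
                        + " firstSCN=" + rs.getLong("FIRST_CHANGE#"));
            }
        }
    }
}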

The database must have more than one REDO log group, and the logs should be multiplexed. With only one group, Oracle would overwrite the same log file again and again, so in the case of recovery only the most recent data would be available; therefore we must have more than one REDO log group. Multiplexing means keeping more than one copy (member) of each REDO log file, so that if one copy gets corrupted, another can be used as a backup. The copies must be written in parallel to stay synchronized; otherwise the members would hold different sets of data, which would be of no use for recovery. Apart from this, we should archive the REDO log files as well for better recovery options; to achieve this, the database needs to run in ARCHIVELOG mode.

How are REDO logs used, then? During the database recovery process, restoring a backup only brings the database back to the time when that backup was taken. After this, the REDO logs are used to redo all transactions sequentially up to the crash point. In an incomplete recovery scenario the same technique is followed; the DBA uses SCN numbers to identify a safe point for the recovery process.

REDO logs are also used to keep standby (disaster recovery site) databases in sync with the production database. There are various sub-categories here regarding how data can be transferred to the standby sites, depending on how frequently the active database is updated and the business impact of data non-availability.

In summary, REDO logs are one of the core pieces of the recovery process, and unlike UNDO there is no functionality directly exposed to the end user, such as rollback or flashback.

Oracle: Understanding UNDO Usage

Having effective UNDO functionality is one of the key requirements for your database, and Oracle manages UNDO beautifully. So what is UNDO used for, and how does it relate to application/batch process coding?

Every uncommitted DML statement generates undo data, which lets the database user roll back transactions up to a certain time, depending on the size of the UNDO tablespace and the retention time. Housekeeping or batch (delete/update) processes and background DML jobs generate the most UNDO data, as they touch a high number of records. Inserts can also hit UNDO, for example through index rebalancing activity on large indexed tables. In a normal scenario, every delete generates a good amount of UNDO data. Oracle uses this information to maintain data integrity: when a database user issues a rollback command, Oracle undoes all activity at the database level back to the last commit. The database stores all this information in UNDO tablespaces. Also, when a batch process has updated some information but not yet committed, and at the same time another database user queries the same set of information, Oracle fetches the old values from the UNDO tablespace and the rest of the data from the actual tables. In such a scenario, Oracle will not let the user view uncommitted data, thereby maintaining data integrity.

Oracle provides functionality to configure the UNDO retention time, i.e. for how long the database should keep UNDO data. After setting this value we can force the database to retain UNDO data for the given period. If we enforce undo retention, we must make sure to allocate the right amount of space to the UNDO tablespace; otherwise we will get "unable to extend UNDO tablespace" errors.
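As a small illustrative check (placeholder connection details again, and the user needs SELECT access to V$PARAMETER), the configured retention can be read back like this:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UndoRetentionCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own host/service and credentials
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT value FROM v$parameter WHERE name = 'undo_retention'")) {
            if (rs.next()) {
                // undo_retention is expressed in seconds
                System.out.println("undo_retention = " + rs.getString(1) + " seconds");
            }
        }
    }
}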

Oracle also provides the flashback query feature, using which we can view a dataset as it was at a given time in the past. How far back we can go depends on the size of the UNDO tablespace and how frequently the table is updated; in a normal scenario we can look back up to 24 hours, or even a week if database activity is very quiet, depending on the undo retention parameter. To run a flashback query on a specific table, non-schema-owners need the flashback grant on that table. Previously, once a change was committed there was no way to find the earlier value of the data other than restoring a backup, which was not an easy route. Using flashback queries is not difficult: we just add the "as of timestamp" clause and provide the right point in the past in the correct timestamp format.
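As a minimal sketch of such a query (the orders table, its columns, and the connection details are all hypothetical placeholders), the code below reads a row as it looked one hour ago:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FlashbackQueryExample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own host/service and credentials
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "appuser", "password");
             // AS OF TIMESTAMP reads the table as it was at that point in the past,
             // provided the UNDO data for that period is still retained
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT status FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)"
                     + " WHERE order_id = ?")) {
            ps.setLong(1, 1001L); // hypothetical order id
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Status one hour ago: " + rs.getString("status"));
                }
            }
        }
    }
}

If the requested time falls outside the retained UNDO window, Oracle raises an ORA-01555 "snapshot too old" error instead of returning data.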