Bubble Sort is the simplest sorting algorithm. It works by repeatedly stepping through the array and swapping adjacent elements if they are in the wrong order.
Time complexity:
In the average/worst case it takes about n passes to sort the array, so the complexity is O(n^2).
In the best case it takes O(n), which happens only when the array is already sorted and the implementation is optimized to stop early once a full pass makes no swaps.
Bubble Sort Example :
// Java program to implement Bubble Sort
class BubbleSortPT {
    void bubbleSort(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - i - 1; j++)
                if (arr[j] > arr[j + 1]) {
                    // swap arr[j] and arr[j+1]
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
    }

    /* Prints the array */
    void printArray(int arr[]) {
        for (int i = 0; i < arr.length; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    // Main method to test the above code
    public static void main(String args[]) {
        BubbleSortPT ob = new BubbleSortPT();
        int arr[] = {64, 34, 25, 12, 22, 11, 90};
        ob.bubbleSort(arr);
        ob.printArray(arr);
    }
}
Design patterns for application development are programming language independent strategies for solving a common problem. It means a design pattern represents an idea, not a particular implementation. By using design patterns you can make your code more flexible, maintainable and reusable.
The concept of design patterns in software development was originally presented in the 1994 book Design Patterns: Elements of Reusable Object-Oriented Software. The book was written by four authors who are now known collectively as the “Gang of Four”.
A design pattern provides a general, reusable solution for common problems that occur in software design.
It’s not mandatory to implement design patterns in your project always.
However, failing to apply proven patterns will usually show up as increased development time, effort, cost, bugs and rework, and as an inability to adapt easily to changing customer requirements.
So how do you find the right design pattern?
To find out which pattern to use, you first have to understand the design patterns and their purposes. Only then will you be able to pick the right one.
Types of Design Patterns
There are three major types of design patterns for application development.
1. Creational Design Patterns
Creational design patterns are about class instantiation, or in other words, object creation.
Some creational design patterns are:
Factory Method
Abstract Factory
Prototype
Singleton
Builder
Object Pool
2. Structural Design Patterns
These patterns are about organizing different classes and objects into larger structures that provide new functionality. Structural class-creation patterns use inheritance to compose interfaces.
Some Structural design patterns are :
Adapter
Bridge
Composite
Decorator
Flyweight
Facade
Private Class Data
Proxy
3. Behavioral Design Patterns
These patterns are about identifying common communication patterns between objects and realizing them; they are specifically concerned with how objects communicate.
Some examples are :
Chain of responsibility
Command
Interpreter
Iterator
Mediator
Null Object
Observer
State
Strategy
Memento
Visitor
Template method
Five (5) Most Important Design Patterns
1. Singleton
This pattern ensures a class has only one instance and provides a global point of access to it.
Scenario: the application needs only one instance of an object, and lazy initialization and global access to it are necessary.
There are several examples of singleton objects where only a single instance of a class should exist, such as caches, thread pools, database connection pools and registries.
How to make a class Singleton?
Define a private static attribute in the “single instance” class.
Define a public static method/function in the class – this is to be used by clients to access the object outside this class.
Do “lazy initialization” of object (i.e create object on first use) in the accessor function.
Define all constructors to be protected or private.
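The four steps above can be sketched in plain Java as follows; the class name AppConfig and the synchronized accessor are illustrative choices, not part of the original steps:

```java
// A lazily initialized singleton following the four steps above.
// The class name AppConfig is illustrative.
class AppConfig {
    // Step 1: private static attribute holding the single instance
    private static AppConfig instance;

    // Step 4: private constructor so clients cannot instantiate directly
    private AppConfig() { }

    // Steps 2 and 3: public static accessor doing lazy initialization;
    // synchronized keeps first-use creation thread-safe (an assumption,
    // not part of the original steps)
    public static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();  // created on first use
        }
        return instance;
    }
}

public class SingletonDemo {
    public static void main(String[] args) {
        // Both calls return the same object
        System.out.println(AppConfig.getInstance() == AppConfig.getInstance());
        // prints: true
    }
}
```

Marking getInstance() synchronized is the simplest way to make lazy initialization thread-safe; an eagerly initialized static final field or an enum singleton avoids the locking cost.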
2. Factory Method
In Factory Pattern you can define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.
The new operator is the usual way to instantiate a class. However, using the new keyword directly every time tightly couples the code to that particular class. To avoid this, in the Factory Method pattern objects are created by calling a factory method instead of calling a constructor.
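As a sketch, here is a simplified static factory in Java; the full GoF Factory Method lets subclasses override the creation method, but the decoupling idea is the same. The Shape, Circle, Square and ShapeFactory names are illustrative:

```java
// A simplified static factory; Shape, Circle, Square and ShapeFactory
// are illustrative names, not from the original article.
interface Shape {
    String draw();
}

class Circle implements Shape {
    public String draw() { return "drawing a circle"; }
}

class Square implements Shape {
    public String draw() { return "drawing a square"; }
}

class ShapeFactory {
    // Callers ask the factory instead of writing `new Circle()` themselves,
    // so they stay decoupled from the concrete classes.
    static Shape create(String type) {
        switch (type) {
            case "circle": return new Circle();
            case "square": return new Square();
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Shape shape = ShapeFactory.create("circle");
        System.out.println(shape.draw());  // prints: drawing a circle
    }
}
```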
3. Observer
This pattern defines a one-to-many dependency between objects, such that when one object changes its state, all its dependent objects are notified.
Model the independent functionality with a “subject” abstraction.
Model the dependent functionality with an “observer” hierarchy.
The Subject broadcasts events to all registered Observers.
The Subject may “push” information at the Observers, or, the Observers may “pull” the information they need from the Subject.
The pattern consists of two actors, the observer who is interested in the updates and the subject who generates the updates.
For example: when you subscribe to a news feed on a social media website, every subscriber sees each new post as soon as it is published.
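The news-feed example can be sketched in Java like this; the NewsFeed and Observer names are illustrative, and this version implements only the "push" style, where the subject sends the data to the observers:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal subject/observer sketch for the news-feed example.
interface Observer {
    void update(String post);
}

class NewsFeed {  // the subject
    private final List<Observer> observers = new ArrayList<>();

    void subscribe(Observer o) {
        observers.add(o);
    }

    void publish(String post) {
        // broadcast the new post to every registered observer
        for (Observer o : observers) {
            o.update(post);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        NewsFeed feed = new NewsFeed();
        feed.subscribe(post -> System.out.println("Alice sees: " + post));
        feed.subscribe(post -> System.out.println("Bob sees: " + post));
        feed.publish("New article!");  // both subscribers are notified
    }
}
```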
4. Builder
As the name suggests the builder pattern is used to build objects.
Sometimes object creation gets complex: one object can have multiple sub-objects. The intent is to separate the construction of a complex object from its representation, so that the same construction process can create different representations.
The builder pattern might seem similar to the ‘abstract factory’ design pattern but one difference is that the builder pattern creates an object step by step whereas the abstract factory pattern returns the object in one go.
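A minimal Java sketch of step-by-step construction; the User class and its fields are illustrative, not from the original article:

```java
// Minimal builder sketch; the User class and its fields are illustrative.
class User {
    private final String name;
    private final int age;
    private final String email;

    private User(Builder b) {
        this.name = b.name;
        this.age = b.age;
        this.email = b.email;
    }

    @Override
    public String toString() {
        return name + " (" + age + ", " + email + ")";
    }

    // The builder collects the parts step by step; build() produces
    // the finished object in one final call.
    static class Builder {
        private String name;
        private int age;
        private String email;

        Builder name(String name)   { this.name = name; return this; }
        Builder age(int age)        { this.age = age; return this; }
        Builder email(String email) { this.email = email; return this; }

        User build() { return new User(this); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        User user = new User.Builder()
                .name("Maverick")
                .age(30)
                .email("mav@example.com")
                .build();
        System.out.println(user);  // prints: Maverick (30, mav@example.com)
    }
}
```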
5. Adapter
This design pattern allows incompatible classes to work together by converting the interface of one class into another. Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces.
Adapter is meant to change the interface of an existing object. Facade defines a new interface, whereas Adapter reuses an old interface. Remember that Adapter makes two existing interfaces work together as opposed to defining an entirely new one.
For example: if you have two applications, with one producing output in XML format and the other requiring JSON as input, you'll need an adapter between the two to make them work together seamlessly.
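The XML-to-JSON scenario can be sketched as an adapter in Java; the interfaces and the toy string conversion below are illustrative, not a real XML or JSON library:

```java
// Adapter sketch for the XML-to-JSON example above.
interface JsonSource {
    String getJson();
}

class XmlProducer {  // the existing class with the incompatible interface
    String getXml() {
        return "<user><name>maverick</name></user>";
    }
}

// The adapter implements the interface the consumer expects (JsonSource)
// and translates calls to the adaptee's interface (XmlProducer).
class XmlToJsonAdapter implements JsonSource {
    private final XmlProducer producer;

    XmlToJsonAdapter(XmlProducer producer) {
        this.producer = producer;
    }

    public String getJson() {
        // toy conversion for illustration only
        String xml = producer.getXml();
        String name = xml.replaceAll(".*<name>(.*)</name>.*", "$1");
        return "{\"user\":{\"name\":\"" + name + "\"}}";
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        JsonSource source = new XmlToJsonAdapter(new XmlProducer());
        System.out.println(source.getJson());
        // prints: {"user":{"name":"maverick"}}
    }
}
```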
Summary
In this article, we learnt about design patterns for application development and their importance; we also saw the three categories of design patterns and finally had a look at five basic and important design patterns.
Spring Security is, as the Spring team describes it, a framework that provides authentication, authorization, and protection against common attacks. It has become the de facto standard for securing Spring-based applications. In this tutorial we will learn about Spring Security with Spring Boot, along with an example.
Spring Security requires a Java 8 or newer runtime environment.
Spring Security Features
Authentication
Authorization
Protection against common exploits
1. Authentication
There are many ways to authenticate users; the most common is to require the user to enter a username and password. There are several password storage techniques, such as:
Using Spring Security PasswordEncoder
Using BCryptPasswordEncoder
Using DelegatingPasswordEncoder
Using NoOpPasswordEncoder
2. Protection against common exploits
Spring Security provides protection against common exploits. Whenever possible, the protection is enabled by default.
CSRF (Cross Site Request Forgery) attacks
Default security HTTP headers
As a framework, Spring Security does not handle HTTP connections and thus does not provide support for HTTPS directly. However, it does provide a number of features that help with HTTPS usage, such as:
Redirect to HTTPS
Strict Transport Security
Proxy server configuration
Spring Security with Spring Boot Example
1. Project Structure
2. REST Controller – EmployeeController
First, create a REST controller class with a few HTTP methods such as GET, POST and DELETE.
@SpringBootApplication
public class SpringSecurityApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringSecurityApplication.class, args);
    }
}
OUTPUT
Example 1 : Unauthorized Login Unsuccessful because of Spring Security
// With a different user (user123) who is not an authorized user
C:\>curl localhost:8080/api/employees -u user123:password
{"timestamp":"2020-04-05T14:39:25.805+0000","status":401,"error":"Unauthorized","message":"Unauthorized","path":"/api/employees"}

// Calling the DELETE API with a user who is not authorized to access it
C:\>curl -X DELETE localhost:8080/api/employees/1 -u user:password
{"timestamp":"2020-04-05T14:37:23.688+0000","status":403,"error":"Forbidden","message":"Forbidden","path":"/api/employees/1"}
DELETE API Call : with user=admin and password=password
C:\> curl -X DELETE localhost:8080/api/employees/1 -u admin:password deleted employee with id 1 successfully
Try testing in Browser as well
Open the browser : localhost:8080/api/employees – it will redirect to /login first
Provide the right credentials as configured in the SpringSecurityConfig class
Summary
In this tutorial we learnt how to use Spring Security with REST APIs and Spring Boot. We created a simple Spring Boot REST project and secured it using Spring Security. At the end we also saw the output of the code using curl on the command line as well as in the browser.
Please comment below for suggestions and questions, I hope you liked it !
Security is the biggest API technology challenge waiting to be solved today. Organizations now understand the need for REST API security, or any API security for that matter.
Why is Security Important?
API security is one of the biggest challenges that everyone, including IT organizations, wants to see solved. Solving security-related issues will lead to the growth of APIs.
If your API does not have an authentication/authorization mechanism, it can be misused, overloading the servers and the API itself and making it less responsive for everyone else.
Factors Not to Overlook
1. Protection of Data :
Data protection is one of the most important security concerns. It is very important to define access rights for the different kinds of API methods (especially PUT and DELETE).
A good level of authentication is needed to access these APIs, and access should also be logged for audit.
2. Protection against Attacks :
There are several kinds of attacks that can occur on APIs; to name a few:
Injection Attacks :
Examples are SQL injection and XSS (cross-site scripting), where malicious attacker code is injected or embedded into an insecure application. In the case of APIs, untrusted data can be passed to the API as part of a query or command, and if this data is successfully injected as input it can do a lot of damage or leak information. You can defend against this by adding input validation and constraints.
DoS Attack :
In a Denial of Service attack, attackers flood the server, in this case the API servers, with messages, with the aim of sending so many invalid requests that the API becomes non-functional. If someone succeeds with a DoS attack, the consumers and users of those API services will no longer be able to access them.
Exposing Sensitive Data :
Sensitive data may include personal information, tokens, passwords, social security details, and banking details such as credit card numbers. This data requires strong protection; it can be secured using techniques such as TLS/SSL encryption in transit.
Authorized Access Only :
Authorization is an important factor to pay attention to. Missing it may expose valuable or sensitive information and make the service vulnerable to attackers, or to any user who is not supposed to have that extra piece of information. Proper access controls should be applied to its users.
3. Anti-Farming :
Today, one of the most common RESTful API use cases is the online booking industry: many large websites have a business model of consuming multi-sector services such as hotel bookings, airline ticketing and movie tickets, taking advantage of APIs built by individual companies in those sectors.
In cases like these, if your APIs are not secured with an auth mechanism, they can be misused, suffer performance issues, or become unresponsive to other users.
Ways to Secure APIs
RESTful APIs are stateless, so auth security must not depend on sessions or cookies alone; auth parameters should be validated on each and every request to the server, most commonly via headers. There are multiple ways to secure RESTful APIs; we will shed light on some of them below.
Basic Auth
API Keys
OAuth
JWT
1. Basic Authentication
Probably one of the simplest ways to implement access control in RESTful APIs. It does not require any cookies or sessions; it just passes auth credentials through the HTTP headers.
It involves the client sending the user ID and password, separated by a single colon and Base64-encoded together, as one string.
Example: user ID "maverick" and password "pass123", separated by a single colon.
maverick:pass123
Authorization: Basic bWF2ZXJpY2s6cGFzczEyMw==
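The header value above can be reproduced with the JDK's built-in Base64 encoder, using the sample credentials from the example:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value of the Authorization header for HTTP Basic auth
    static String header(String userId, String password) {
        String credentials = userId + ":" + password;  // userid:password
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }

    public static void main(String[] args) {
        System.out.println("Authorization: " + header("maverick", "pass123"));
        // prints: Authorization: Basic bWF2ZXJpY2s6cGFzczEyMw==
    }
}
```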
Basic auth is not a very secure method of user authentication; its most serious flaw is that the user ID and password travel over the network in the header as a merely encoded, not encrypted, string.
Still, if you use this technique, it should be combined with TLS or SSL (HTTPS) in order to protect the sensitive information.
2. API Keys
API keys are for projects or applications; authentication is for users. For example, Google Cloud exposes API keys to access its APIs and identify the calling application.
While API keys identify the calling project, they don’t identify the calling user. For instance, if you have created an application that is calling an API, an API key can identify the application that is making the call, but not the identity of the person who is using the application.
You use API Keys when :
You want to block anonymous traffic. API keys identify an application's traffic for the API producer.
You want to control the number of calls made to your API.
You want to identify usage patterns in your API’s traffic.
You cannot use API Keys for :
Identifying individual users: API keys don't identify users, they identify projects.
Secure authorization.
Identifying the creators of a project/application.
3. OAuth 2
OAuth 2.0 is an authorization framework that enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner or by allowing the third-party application to obtain access on its own behalf (RFC 6749).
OAuth gained popularity from usage by Google, Facebook, Microsoft and Twitter, who allow usage of their accounts to be shared with third-party applications or websites.
OAuth works over HTTPS and authorizes devices, APIs, applications and servers with access tokens rather than credentials.
OAuth 2.0 can be used to read data of a user from another application without compromising the user’s personal and sensitive data, like user credentials. For Example user using Facebook OAuth login for logging into Quora. It also supplies the authorization workflow for web, desktop applications, and mobile devices.
To simplify, you can think of it as a hotel key card. If you have that key card, you can access your room and other resources. But how do you get a key card? You have to authenticate your identity and booking at the front desk. After they authenticate you and give you the key card, you get access to the hotel's other resources as well.
OAuth Tokens :
OAuth uses two kinds of tokens to authorize users: access tokens and refresh tokens.
Access Tokens :
These are the tokens the client uses to access the resource API, and they have an expiry. They are not limited to confidential (secret) clients; they are also available to public clients.
Refresh Tokens :
These tokens can live much longer: days, months or even years. A refresh token is used to obtain new access tokens. To get a refresh token, applications typically need to be confidential (secret) clients with client authentication.
The access token is typically put in the Authorization header, e.g.:
Authorization: Bearer 0123456789abcxyz
4. JWT – Json Web Token
What is JSON Web Token, or JWT (pronounced "jot")? It is a token-based authentication standard. It allows you to digitally sign information (called claims) with a signature that can later be verified using a secret signing key.
A JWT can store any type of data and, like OAuth access tokens, should be passed in the Authorization header.
Try it out : Create your own JWT token here – https://jwt.io/
How JWT works ?
A JWT is a single string of three Base64URL-encoded parts separated by dots: a header (the signing algorithm and token type), a payload (the claims), and a signature computed over the first two parts with the secret signing key.
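The structure is easy to inspect: split the token on the dots and Base64URL-decode the first two parts. The header below is taken from the default sample token on jwt.io; the signature part can only be checked with the secret key:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtStructure {
    // Decodes one Base64URL-encoded JWT part (header or payload)
    static String decodePart(String part) {
        return new String(Base64.getUrlDecoder().decode(part), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Header part of the default sample token on jwt.io;
        // a full token looks like header.payload.signature
        String header = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9";
        System.out.println(decodePart(header));
        // prints: {"alg":"HS256","typ":"JWT"}
    }
}
```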
How Secure is JWT ?
The header and payload are only encoded, not encrypted, so remember not to put sensitive information in them, for example passwords, IDs, SSNs, etc.
What if someone Steals my JWT ?
Anyone who holds a valid JWT can use it, so you have to be careful how you pass your JWT: it should travel over HTTPS, and it should be used in conjunction with other established authentication mechanisms such as OAuth.
JWT Use case
These days JWTs are very common and are used for authentication, single sign-on, and authorizing subsequent API requests.
Summary
In this article, we looked into one of the most challenging parts of working with APIs: REST API security. We looked at some of the important factors that make API security so important to understand.
Then we looked at various ways to secure REST APIs: Basic Authentication, API keys, OAuth 2 and JWT, to name a few.
I hope you liked this article ! Please leave your comments below.
We will go through Apache Kafka Configuration settings which you need to do as part of setting up Apache Kafka.
Four Kafka Component Settings
Broker Settings
Producer Settings
Consumer Settings
Zookeeper Configuration with Kafka
Apache Kafka Configuration
1. Broker Settings
The overall performance of Kafka depends on the following sub-settings.
1. Connection Settings
zookeeper.session.timeout.ms
The ZooKeeper session timeout; the default value is 30000 ms (milliseconds).
Within this time the server sends ZooKeeper heartbeat signals; if it fails to do so, the server is considered dead. Do not set this value too low, or a live server will falsely be considered dead; do not set it too high either, or ZooKeeper will take too long to detect a truly dead server.
2. Topic Settings
For each topic, Kafka maintains a structured commit log with one or more partitions. In general, the more partitions in a Kafka cluster, the more parallel consumers can be added, resulting in higher throughput.
Important Topic Properties
auto.create.topics.enable
With this property set to true, nonexistent topics are created automatically with the default replication factor.
default.replication.factor
For high availability production systems, you should set this value to at least 3.
num.partitions
For automatically created topics the default value is 1. You can change it based on your requirements.
delete.topic.enable
This allows users to delete a topic from Kafka using the admin tool; if this property is turned off, deleting a topic through the admin tool has no effect. By default this feature is turned off (set to false).
3. Log Settings
log.roll.hours
The maximum time, in hours, before a new log segment is rolled out. The default value is 168 hours (seven days).
This setting controls the period of time after which Kafka will force the log to roll, even if the segment file is not full. This ensures that the retention process is able to delete or compact old data.
log.retention.hours
The number of hours to keep a log file before deleting it. The default value is 168 hours (seven days).
log.dirs
A comma-separated list of directories in which log data is kept. If you have multiple disks, list all directories under each disk.
log.retention.bytes
The amount of data to retain in the log for each topic partition. By default, log size is unlimited.
If log.retention.hours and log.retention.bytes are both set, Kafka deletes a segment when either limit is exceeded.
log.segment.bytes
The log for a topic partition is stored as a directory of segment files. This setting controls the maximum size of a segment file before a new segment is rolled over in the log. The default is 1 GB.
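Put together, the log settings above might appear in a broker's server.properties like this (the values are illustrative examples, not recommendations):

```properties
# Illustrative server.properties fragment for the log settings above
# (values are examples, not recommendations)

# One log directory per disk
log.dirs=/data/kafka-logs-1,/data/kafka-logs-2

# Roll a new segment at least every 7 days, even if it is not full
log.roll.hours=168

# Delete a segment when it is older than 7 days OR the partition
# exceeds 1 GB, whichever limit is hit first
log.retention.hours=168
log.retention.bytes=1073741824

# Maximum size of a single segment file (1 GB)
log.segment.bytes=1073741824
```

Note that Java properties files do not support inline comments, so each comment sits on its own line.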
Log Flush Management
log.flush.interval.messages
Specifies the number of messages to accumulate on a log partition before Kafka forces a flush of data to disk.
log.flush.scheduler.interval.ms
Specifies the amount of time (in milliseconds) after which Kafka checks to see if a log needs to be flushed to disk.
log.segment.bytes
Specifies the size of the log file. Kafka flushes the log file to disk whenever a log file reaches its maximum size.
log.roll.hours
Specifies the maximum length of time before a new log segment is rolled out (in hours); this value is secondary to log.roll.ms. Kafka flushes the log file to disk whenever a log file reaches this time limit.
4. Compacting Settings
log.cleaner.dedupe.buffer.size
Specifies total memory used for log de-duplication across all cleaner threads.
By default, 128 MB of buffer is allocated.
log.cleaner.io.buffer.size
Specifies the total memory used for log cleaner I/O buffers across all cleaner threads. By default, 512 KB of buffer is allocated.
5. General Broker Settings
auto.leader.rebalance.enable
Enables automatic leader balancing, default is enabled.
unclean.leader.election.enable
This property allows you to specify a preference of availability or durability. This is an important setting: If availability is more important than avoiding data loss, ensure that this property is set to true. If preventing data loss is more important than availability, set this property to false.
This property is set to true by default, which favors availability.
controlled.shutdown.enable
Enables controlled shutdown of the server. The default is enabled.
min.insync.replicas
When a producer sets acks to “all”, min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception.
You should set min.insync.replicas to 2 for replication factor equal to 3.
message.max.bytes
Specifies the maximum size of message that the server can receive.
broker.rack
The rack awareness feature distributes replicas of a partition across different racks.
2. Producer Settings
The lifecycle of a request from producer to broker involves several configuration settings:
The producer polls for a batch of messages from the batch queue, one batch per partition. A batch is ready when one of the following is true:
a. batch.size is reached. Note: Larger batches typically have better compression ratios and higher throughput, but they have higher latency.
b. linger.ms (time-based batching threshold) is reached. Note: There is no simple guideline for setting linger.ms values; you should test settings on specific use cases. For small events (100 bytes or less), this setting does not appear to have much impact.
compression.type accepts standard compression codecs ('gzip', 'snappy', 'lz4'), as well as 'uncompressed' (the default, equivalent to no compression).
acks
The acks setting specifies the acknowledgments that the producer requires the leader to have received before considering a request complete. This setting defines the durability level for the producer.
acks = 0 : high throughput, low latency (no delivery guarantee)
acks = 1 : medium throughput, medium latency
acks = -1 (all) : low throughput, high latency (strongest durability)
The producer's flush() method makes all buffered records immediately available to send (even if linger.ms is greater than 0).
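As a sketch of how these settings fit together, here is a plain java.util.Properties version of a producer configuration. The keys are the standard Kafka producer config names; the bootstrap.servers address is an assumption, and no Kafka library is needed to run this:

```java
import java.util.Properties;

public class ProducerSettingsDemo {
    // Assembles the producer settings discussed above. In a real
    // application this Properties object would be passed to
    // new KafkaProducer<>(props) from the kafka-clients library.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker address (assumed)
        props.put("batch.size", "16384");      // size-based batching threshold, in bytes
        props.put("linger.ms", "5");           // time-based batching threshold
        props.put("compression.type", "gzip"); // e.g. gzip, snappy or lz4
        props.put("acks", "all");              // wait for all in-sync replicas
        return props;
    }

    public static void main(String[] args) {
        producerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```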
3. Consumer Settings
One basic guideline for consumer performance is to keep the number of consumer threads equal to the partition count.
4. Zookeeper Configuration with Kafka
Some recommendations :
Do not run ZooKeeper on a server where Kafka is running.
Make sure you allocate sufficient JVM memory. A good starting point is 4GB.
To monitor the ZooKeeper instance, use JMX metrics.
When using ZooKeeper with Kafka you should dedicate ZooKeeper to Kafka, and not use ZooKeeper for any other components.
Summary
In this article we looked at configuration settings for the main Kafka components, along with some recommended values and what each setting means. I hope you liked the article!
Iterators are used to iterate, or traverse, over collections in Java. Iterators can be fail-fast or fail-safe. Fail-fast iterators throw a runtime exception (ConcurrentModificationException) if the collection is modified while iterating over it. Fail-safe iterators do not throw an exception when the collection is modified during iteration, because they work on a clone of the collection rather than on the collection itself.
Iterators over HashMap and ArrayList are examples of fail-fast iterators. Iterators over ConcurrentHashMap and CopyOnWriteArrayList are examples of fail-safe iterators.
Understand with an Example !
1. Example of Fail-fast iterator
import java.util.ArrayList;
import java.util.Iterator;
public class FailFastIteratorTest {
public static void main(String[] args) {
ArrayList<String> list = new ArrayList<>();
list.add("john1");
list.add("john2");
list.add("john3");
list.add("john4");
list.add("john5");
System.out.println(list);
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()){
if(iterator.next().equals("john3"))
{
list.remove("john3");
}
}
System.out.println(list);
}
}
OUTPUT
[john1, john2, john3, john4, john5]
Exception in thread "main" java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
Problem
The problem here is that you are trying to remove the element "john3" from the list while iterating over it; this leads to a ConcurrentModificationException.
How to overcome it ?
So instead of list.remove("john3"), use the iterator's remove() method inside the condition:
while (iterator.hasNext()){
if(iterator.next().equals("john3"))
{
iterator.remove();
}
}
Using the iterator's remove() method, you will not get any such exception.
Important Fact :
If you wish to use an iterator to traverse a collection and intend to remove elements from that collection during iteration, prefer the iterator's remove() method, as it doesn't throw ConcurrentModificationException.
There is another way to avoid exception, use fail-safe iterator collections instead, we will look into it below.
2. Example of Fail-safe iterator
As we read earlier, the fail-safe iterators do not throw concurrent modification exception because they work or iterate on the clone of the collection and not on the actual.
But there are a couple of drawbacks:
Fail-safe iterators do not guarantee up-to-date data: any modification made to the collection after the iterator is created is not reflected in the iterator, since it works on the clone.
There is also the memory and time overhead of creating a clone of the collection to work on.
I. Example of CopyOnWriteArrayList :
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
public class FailSafeIteratorTest {
public static void main(String[] args) {
List<String> list = new CopyOnWriteArrayList<>();
list.add("john1");
list.add("john2");
list.add("john3");
list.add("john4");
list.add("john5");
System.out.println(list);
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
if (iterator.next().equals("john3")) {
list.remove("john3");
}
}
System.out.println(list);
}
}
Here you will observe that the fail-safe iterator on CopyOnWriteArrayList does not throw any exception.
II. Example of ConcurrentHashMap :
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
public class FailSafeIteratorTest {
public static void main(String[] args) {
ConcurrentHashMap<Integer,String> map = new ConcurrentHashMap<>();
map.put(1,"one");
map.put(2,"two");
map.put(3,"three");
map.put(4,"four");
System.out.println(map);
Iterator<Integer> iterator = map.keySet().iterator();
while (iterator.hasNext()) {
int key = iterator.next();
System.out.println(key + " : " + map.get(key));
map.put(5,"five");
}
System.out.println(map);
}
}
OUTPUT
{1=one, 2=two, 3=three, 4=four}
1 : one
2 : two
3 : three
4 : four
5 : five
{1=one, 2=two, 3=three, 4=four, 5=five}
Here you will observe that the fail-safe iterator on ConcurrentHashMap does not throw any exception. (Strictly speaking, ConcurrentHashMap's iterators are weakly consistent rather than snapshot-based, which is why the newly added entry 5 shows up in the output above.)
3. Difference between Fail-fast and Fail-safe iterators
Fail-fast Iterator
Throws ConcurrentModificationException when the collection is modified while iterating over it.
Iterates on the original collection.
Has no extra memory or time overhead, since it operates on the actual collection.
Examples are HashMap, ArrayList.

Fail-safe Iterator
Does not throw an exception when the collection is modified while iterating over it.
Iterates over a copy of the original collection, not the actual collection.
Has the overhead of extra memory and time, as it works over a copy of the actual collection.
Examples are ConcurrentHashMap, CopyOnWriteArrayList.
4. How Iterators work internally ?
Initially, in the internal implementation, a variable called expectedModCount is set equal to modCount (the count of structural modifications made to the collection so far).
int expectedModCount = modCount;
If any change is made to the collection, modCount changes; the next iterator operation then throws an exception from the checkForComodification() method.
final void checkForComodification() {
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
}
Summary
In this article we learnt about fail-fast and fail-safe iterators with examples, along with their differences and internal workings. Hope you liked the article!
In REST there are no strict naming conventions, but there are certain guidelines that ensure our web service API URLs are easy to read and understand.
We are free to implement them in any way we want.
Naming guidelines of REST WebServices
Just remember: nothing is strictly right or wrong when naming REST URIs, as these are best practices rather than rules.
1. Simple Names
Nothing too specific here, but the naming of a REST API should be self-describing and simple.
Example
/users/12345 (preferred: simple and self-describing)
/api?type=user&id=12345 (avoid)
2. Use Nouns not verbs
Your URI should refer to a thing (a noun) and not an action (verb).
Example
http://www.programmertoday.com/rest/v1/users
http://www.programmertoday.com/rest/v1/users/{userId}
http://www.programmertoday.com/rest/v1/users/{userId}/orders
http://www.programmertoday.com/rest/v1/users/{userId}/orders/{order-id}
http://www.programmertoday.com/rest/v1/orders/{order-id}
http://www.programmertoday.com/rest/v1/products/{product-id}
We must avoid URI names like the ones below:
http://www.programmertoday.com/rest/v1/getUsers
http://www.programmertoday.com/rest/v1/getProducts
3. Try using Plural Nouns
Though names can be singular nouns as well, the recommended practice is to use plural nouns.
Example
/employees represents all employees
/employees/{emp-id} represents a particular employee
4. Always Version your API
Versioning helps you iterate faster and prevents invalid requests from hitting updated endpoints. It also helps smooth over major API version transitions, as you can continue to offer old API versions for a period of time.
Example
HTTP GET : http://domain/rest/v1/products/{product-id}
HTTP GET : http://domain/rest/v2/products/{product-id}
Summary
In this tutorial, we learnt about REST web service naming guidelines, which one should follow not as rules but as good practice. I hope you liked it!
Statelessness means not holding any state. REST stands for Representational State Transfer, and in practice this means that the server hosting a REST service does not store any state about the client session on its side.
Each request from the client to server must contain all of the information necessary to understand the request, and it cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. Client is responsible for storing and handling all application state related information on client side.
To maintain statelessness, client authorization details should not be stored on the server; each request is treated as a new request and must therefore carry all the required details itself.
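As a minimal sketch of this idea, the request below carries its own credentials rather than relying on a server-side session. The endpoint URL and token value are hypothetical, chosen only for illustration.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class StatelessRequestDemo {
    public static void main(String[] args) {
        // Hypothetical token; in a stateless API every request carries
        // its own credentials instead of relying on a server session.
        String token = "Bearer abc123";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://domain/rest/v1/products/42"))
                .header("Authorization", token)  // all auth state travels with the request
                .GET()
                .build();

        // The server can authorize this call in isolation, with no stored session.
        System.out.println(request.headers().firstValue("Authorization").orElse("missing"));
    }
}
```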
Advantages of REST being Stateless :
It drastically reduces server-side code by removing session synchronization logic.
Easy to scale up: as there are no sessions to maintain, any server can handle any request.
Easy to cache as well.
The server can understand and track each request in isolation, since the client sends all required information with every request.
Web services need not maintain the client’s previous interactions.
Summary
In this tutorial we learnt about the statelessness of REST APIs and its advantages. I hope you liked it!
Browsers do support PUT and DELETE, but it is HTML that doesn’t.
This is because HTML 4.01 and the final W3C HTML 5.0 spec both say that the only HTTP methods that their form elements should allow are GET and POST.
Web pages trying to use forms with method="PUT" or method="DELETE" would fall back to the default method, GET for all current browsers. This breaks the web applications’ attempts to use appropriate methods in HTML forms for the intended action, and ends up giving a worse result — GET being used to delete things!
PUT as a form method makes little sense: you wouldn't want to PUT a form payload. DELETE only makes sense when there is no payload, so it doesn't fit forms either.
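To illustrate that the limitation lies in HTML forms and not in HTTP itself, a programmatic client can build PUT and DELETE requests freely. The sketch below uses Java's built-in java.net.http client against a hypothetical URL; the requests are only constructed, not sent.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class NonFormMethodsDemo {
    public static void main(String[] args) {
        // Hypothetical resource URL. Unlike an HTML form, a programmatic
        // HTTP client may use any method the HTTP spec defines.
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create("http://domain/rest/v1/products/42"))
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"widget\"}"))
                .build();

        HttpRequest delete = HttpRequest.newBuilder()
                .uri(URI.create("http://domain/rest/v1/products/42"))
                .DELETE()   // no payload, matching DELETE semantics
                .build();

        System.out.println(put.method() + " " + delete.method());
    }
}
```

This is also why browser JavaScript (via fetch or XMLHttpRequest) and tools such as curl or Postman have no trouble exercising PUT and DELETE endpoints.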
Conclusion
If you use a browser HTML form to test a REST API that expects the HTTP method PUT or DELETE, it will not work: the form falls back to the default GET method, and the REST API URL will not give the proper response.
In this tutorial on Spring Boot exception handling, we will see how to create our own custom exception handler and a mechanism to handle various kinds of exceptions in REST endpoints.
In the exception handler below, you can add your own methods to handle additional exception types.
Let’s Begin
1. RestController – Create
@GetMapping("/dependant/dept-id/{id}")
@ResponseBody
public Dependant getDependantbyId(@PathVariable int id) {
    if (id > 10) {
        throw new MyCustomException("the id is not in range");
    }
    // no need to pre-instantiate Dependant; return the repository result directly
    return dependantRepository.findBydeptid(id);
}
2. Custom Exception Class – Create
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
@ResponseStatus(value = HttpStatus.NOT_FOUND)
public class MyCustomException extends RuntimeException {
public MyCustomException(String message){
super(message);
}
}
3. Controller Advice – Custom Exception Handler
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.context.request.WebRequest;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;
@ControllerAdvice
public class CustomExceptionHandler extends ResponseEntityExceptionHandler {
@ExceptionHandler(Exception.class)
public final ResponseEntity<Object> handleAllExceptions(Exception ex, WebRequest request) {
System.out.println("Inside handleAllExceptions() method of CustomExceptionHandler");
return new ResponseEntity<>("This is a General Exception....", HttpStatus.INTERNAL_SERVER_ERROR);
}
@ExceptionHandler(MyCustomException.class)
public final ResponseEntity<Object> handleUserNotFoundException(MyCustomException ex, WebRequest request) {
System.out.println("Inside handleUserNotFoundException() method of CustomExceptionHandler method");
return new ResponseEntity<>("This is MyCustomException.....", HttpStatus.NOT_FOUND);
}
}
OUTPUT
GET API : http://localhost:8080/api/dependant/dept-id/11
Since id=11 is outside the allowed range, the controller throws MyCustomException, which our handler converts into the response below:
Status : 404 Not Found
This is MyCustomException…..
Summary
In this tutorial, we learnt how to create our own custom exception handler and a mechanism to handle various kinds of exceptions in REST endpoints.
We used the Spring annotations @ExceptionHandler and @ControllerAdvice to achieve this.