Binary Search

Binary Search is a searching algorithm for finding an element's position in a sorted array.

In this approach, the element is always searched for in the middle of a portion of the array.

Binary search can be implemented only on a sorted list of items. If the elements are not sorted already, we need to sort them first.

Working of Binary Search

Binary search can be implemented in two ways:

  • Iterative method
  • Recursive method

The recursive method follows the divide and conquer approach. The general steps for both methods are discussed below.

  • Set two pointers low and high to the first and last index of the array, and let mid be the middle index, mid = (low + high) / 2.
  • If x == arr[mid], return mid. Else, compare the element to be searched with arr[mid].
  • If x > arr[mid], compare x with the middle element of the elements on the right side of mid. This is done by setting low = mid + 1.
  • Else, compare x with the middle element of the elements on the left side of mid. This is done by setting high = mid - 1.
  • Repeat until x is found or the interval is empty (low > high).
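The example code for these steps did not survive extraction; the following is a minimal Python sketch of both methods (the sample array and target are chosen for illustration):

```python
def binary_search_iterative(arr, x):
    """Return the index of x in sorted arr, or -1 if absent (iterative method)."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            low = mid + 1    # search the right half
        else:
            high = mid - 1   # search the left half
    return -1

def binary_search_recursive(arr, x, low, high):
    """Return the index of x in sorted arr[low..high], or -1 (recursive method)."""
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] < x:
        return binary_search_recursive(arr, x, mid + 1, high)
    return binary_search_recursive(arr, x, low, mid - 1)

arr = [3, 4, 5, 6, 7, 8, 9]
print(binary_search_iterative(arr, 4))                    # 1
print(binary_search_recursive(arr, 4, 0, len(arr) - 1))   # 1
```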


Time Complexities

  • Best case complexity: O(1)
  • Average case complexity: O(log n)
  • Worst case complexity: O(log n)

Space Complexity

The space complexity of binary search is O(1).

Binary Search Applications

  • In the standard libraries of Java, .NET, and the C++ STL
  • While debugging, binary search is used to pinpoint the place where an error happens


Describing binary search

  • Let min = 1 and max = n.
  • Guess the average of max and min, rounded down so that it is an integer.
  • If you guessed the number, stop. You found it!
  • If the guess was too low, set min to be one larger than the guess.
  • If the guess was too high, set max to be one smaller than the guess.
  • Go back to step two.
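The steps above can be sketched directly in Python; the guessing game is modeled here as a comparison against a hidden target (the range and target are assumptions for illustration):

```python
def guess_number(n, target):
    """Find target in 1..n using the guessing strategy above.

    Returns the number of guesses taken (target is assumed to be in range)."""
    low, high = 1, n
    guesses = 0
    while low <= high:
        guess = (low + high) // 2   # average of max and min, rounded down
        guesses += 1
        if guess == target:
            return guesses          # you found it, stop
        if guess < target:          # guess was too low
            low = guess + 1
        else:                       # guess was too high
            high = guess - 1

print(guess_number(100, 42))   # 7
```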

Binary Search Algorithm – Iterative and Recursive Implementation

Given a sorted array of n integers and a target value, determine if the target exists in the array in logarithmic time using the binary search algorithm. If the target exists in the array, print its index.


A simple solution would be to perform a linear search on the given array. It sequentially checks each array element for the target value until a match is found or all the elements have been searched. The worst-case time complexity of this approach is O(n) as it makes at most n comparisons, where n is the size of the input. This approach doesn’t take advantage of the fact that the input is sorted.

How to perform better?

The idea is to use binary search, which is a divide-and-conquer algorithm. Like all divide-and-conquer algorithms, binary search first divides a large array into two smaller subarrays and then recursively (or iteratively) operates on the subarrays. But instead of working on both subarrays, it discards one subarray and continues with the other. The decision to discard one subarray is made in just one comparison.

So binary search reduces the search space to half at each step. By search space, we mean the subarray of the given array where the target value may be located (if present in the array). Initially, the search space is the entire array, and binary search redefines the search space at every step of the algorithm by using the property that the array is sorted. It does so by comparing the middle value in the search space to the target value. If the target value matches the middle element, its position in the array is returned; otherwise, half of the search space is discarded based on the comparison result.

Let’s track the search space using two indexes, low and high. Initially, low = 0 and high = n-1 (the whole array is our search space at first). At each step, find the middle index mid of the search space and compare nums[mid] with the target. There are three possible cases:

  • If target = nums[mid] , return mid .
  • If target < nums[mid] , discard all elements in the right search space, including the middle element, i.e., nums[mid…high] . Now our new high would be mid-1 .
  • If target > nums[mid] , discard all elements in the left search space, including the middle element, i.e., nums[low…mid] . Now our new low would be mid+1 .

Repeat the process until the target is found, or our search space is exhausted.

1. Iterative Implementation

The algorithm can be implemented iteratively as follows in C, Java, and Python:
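The original C, Java, and Python listings were lost in extraction; below is a minimal Python sketch of the iterative version (the sample array is an assumption, chosen so the output matches the one quoted below):

```python
def binary_search(nums, target):
    # The search space is nums[low..high]
    low, high = 0, len(nums) - 1
    while low <= high:
        mid = (low + high) // 2
        if target == nums[mid]:
            return mid
        elif target < nums[mid]:
            high = mid - 1   # discard nums[mid..high]
        else:
            low = mid + 1    # discard nums[low..mid]
    return -1

if __name__ == '__main__':
    nums = [2, 5, 6, 8, 9, 10]
    target = 5
    index = binary_search(nums, target)
    if index != -1:
        print('Element found at index', index)
    else:
        print('Element not found in the array')
```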


Output: Element found at index 1

2. Recursive Implementation

We can easily convert the above iterative version of the binary search algorithm into a recursive one. The algorithm can be implemented recursively as follows in C, Java, and Python:
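Again, the listings themselves are missing from this copy; a Python sketch of the recursive version might look like this:

```python
def binary_search(nums, target, low, high):
    """Recursively search nums[low..high] for target; return its index or -1."""
    if low > high:           # search space is exhausted
        return -1
    mid = (low + high) // 2
    if target == nums[mid]:
        return mid
    if target < nums[mid]:
        return binary_search(nums, target, low, mid - 1)
    return binary_search(nums, target, mid + 1, high)

nums = [2, 5, 6, 8, 9, 10]
print(binary_search(nums, 9, 0, len(nums) - 1))   # 4
```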

Performance of Binary Search

We know that at each step of the algorithm, our search space reduces to half. That means if initially, our search space contains n elements, then after one iteration it contains n/2 , then n/4 and so on…

Suppose our search space is exhausted after k steps. Then n/2^k = 1, which gives k = log2(n).

Therefore, the time complexity of the binary search algorithm is O(log2 n), which is very efficient. The auxiliary space required is O(1) for the iterative implementation and O(log2 n) for the recursive implementation, due to the call stack.

Avoid Integer Overflow

A signed int in C/C++ typically takes up 4 bytes of storage, i.e., it allows storage of integers between -2147483648 and 2147483647. Note that on some platforms int may take up only 2 bytes; the exact size can be found with sizeof(int).

So, if (low + high) > 2147483647 , integer overflow will happen.

To avoid integer overflow, we can use either of the following expressions instead of (low + high) / 2:

low + (high - low) / 2 or high - (high - low) / 2

Either form always computes a valid index halfway between low and high, and overflow never happens even for extreme values.



Binary search is one of the most popular search algorithms. It is efficient and one of the most commonly used techniques for solving problems.

If all the names in the world were written down together in order and you wanted to search for the position of a specific name, binary search would accomplish this in a maximum of 35 iterations.

Binary search works only on a sorted set of elements. To use binary search on a collection, the collection must first be sorted.

When binary search is used to perform operations on a sorted set, the number of iterations can always be reduced on the basis of the value that is being searched.

Let us consider the following sorted array:

[Image: a sorted array of 10 elements, indexed 0 to 9]

By using linear search, the position of element 8 would be determined in the 9th iteration.

Let's see how the number of iterations can be reduced by using binary search. Before we start the search, we need to know the start and end of the range. Let's call them low and high.

Now, compare the search value K with the element located at the midpoint of the lower and upper bounds. If the value K is greater, increase the lower bound; otherwise, decrease the upper bound.

Referring to the image above, the lower bound is 0 and the upper bound is 9. The midpoint of the lower and upper bounds is (lower_bound + upper_bound) / 2 = 4. Here a[4] = 4. The value 4 is greater than 2, the value being searched for. Therefore, we do not need to search any element beyond index 4, as the elements beyond it will obviously be greater than 2.

Therefore, we can always drop the upper bound of the array to the position of element 4. Now, we follow the same procedure on the same array with the updated bounds.

Repeat this procedure recursively until low > high. If at any iteration we get a[mid] = key, we return the value of mid; this is the position of key in the array. If key is not present in the array, we return -1.

Implementation:
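The implementation itself is missing from this copy; a minimal Python sketch of the procedure described above (the sorted array 0..9 mirrors the example):

```python
def binary_search(a, key):
    """Return the position of key in sorted array a, or -1 if key is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:      # key can only lie above mid
            low = mid + 1
        else:                 # key can only lie below mid
            high = mid - 1
    return -1

a = list(range(10))           # the sorted array 0..9 from the example
print(binary_search(a, 2))    # 2
print(binary_search(a, 11))   # -1
```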

Time complexity

As we discard one half of the search space at every step of binary search and perform the search operation on the other half, this results in a worst-case time complexity of O(log2 N).


Introduction to Divide and Conquer With Binary Search

This lesson introduces the divide and conquer technique of solving a problem using binary search in a sorted list.

Binary search method

Divide and conquer is an algorithmic paradigm in which the problem is repeatedly divided into subproblems until we reach a point where each subproblem is atomic, i.e., it can't be divided further. At this point, we start solving these atomic problems and combining (merging) the solutions. So, divide-and-conquer solutions have the following three steps:

  • Divide
  • Conquer
  • Merge

Let’s take an example to grasp this concept better.

Consider a sorted list lst of n integers. We are required to find whether a particular integer value exists in the given list or not.


Search Algorithms – Linear Search and Binary Search Code Implementation and Complexity Analysis

Ashutosh Krishna

Search algorithms are a fundamental computer science concept that you should understand as a developer. They work by using a step-by-step method to locate specific data among a collection of data.

In this article, we'll learn how search algorithms work by looking at their implementations in Java and Python.

What is a Search Algorithm?

According to Wikipedia, a search algorithm is:

Any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values.

Search algorithms are designed to check or retrieve an element from any data structure where that element is being stored. They search for a target (key) in the search space.

Types of Search Algorithms

In this post, we are going to discuss two important types of search algorithms:

  • Linear or sequential search
  • Binary search

Let's discuss these two in detail with examples, code implementations, and time complexity analysis.

This algorithm works by sequentially iterating through the whole array or list from one end until the target element is found. If the element is found, it returns its index; otherwise, it returns -1.

Now let's look at an example and try to understand how it works:

Suppose the target element we want to search for is 7.

Approach for Linear or Sequential Search

  • Start with index 0 and compare each element with the target
  • If the target is found to be equal to the element, return its index
  • If the target is not found, return -1
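The Java and Python listings referenced below are missing from this copy; the approach above can be sketched in Python as (sample data assumed):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is not present."""
    for index, element in enumerate(arr):
        if element == target:   # compare each element with the target
            return index
    return -1

print(linear_search([5, 3, 7, 2, 9], 7))   # 2
```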

Code Implementation

Let's now look at how we'd implement this type of search algorithm in a couple different programming languages.

Linear or Sequential Search in Java

Linear or Sequential Search in Python

Time Complexity Analysis

The Best Case occurs when the target element is the first element of the array. The number of comparisons, in this case, is 1. So, the time complexity is O(1) .

The Average Case: On average, the target element will be somewhere in the middle of the array. The number of comparisons, in this case, will be N/2. So, the time complexity will be O(N) (the constant being ignored).

The Worst Case occurs when the target element is the last element in the array or not in the array. In this case, we have to traverse the entire array, and so the number of comparisons will be N. So, the time complexity will be O(N) .

This type of searching algorithm is used to find the position of a specific value contained in a sorted array. The binary search algorithm works on the principle of divide and conquer, and it is considered the best searching algorithm because it is much faster to run.

Now let's take a sorted array as an example and try to understand how it works:

Suppose the target element to be searched is   17 .

Approach for Binary Search

  • Compare the target element with the middle element of the array.
  • If the target element is greater than the middle element, then the search continues in the right half.
  • Else if the target element is less than the middle value, the search continues in the left half.
  • This process is repeated until the middle element is equal to the target element, or until the search space is empty (the target element is not in the array).
  • If the target element is found, its index is returned, else -1 is returned.

Binary Search in Java

Binary Search in Python

The Best Case occurs when the target element is the middle element of the array. The number of comparisons, in this case, is 1. So, the time complexity is O(1) .

The Average Case: On average, the target element will be somewhere in the array. So, the time complexity will be O(logN) .

The Worst Case occurs when the target element is not in the list, or is as far as possible from the middle element. So, the time complexity will be O(logN) .

How to Calculate Time Complexity:

Let's say the iteration in binary search terminates after k iterations, and let the length of the array at a given iteration be N.

At iteration 1, the length of the array is N.

At iteration 2, the length of the array is N/2.

At iteration 3, the length of the array is N/4.

At iteration k, the length of the array is N/2^(k-1).

Also, we know that after k divisions, the length of the array becomes 1:

N/2^k = 1  =>  N = 2^k

If we apply a log function on both sides:

log2(N) = log2(2^k)  =>  log2(N) = k * log2(2)

Since log_a(a) = 1, it follows that k = log2(N).

So now we can see why the time complexity of binary search is O(log2(N)).

You can also visualize the above two algorithms using the simple tool built by Dipesh Patil - Algorithms Visualizer .

Order-agnostic Binary Search

Suppose, we have to find a target element in a sorted array. We know that the array is sorted, but we don’t know if it’s sorted in ascending or descending order.

Approach for Order-agnostic Binary Search

The implementation is similar to binary search except that we need to identify whether the array is sorted in ascending order or descending order. This then lets us make the decision about whether to continue the search in the left half of the array or the right half of the array.

  • We first compare the target with the middle element
  • If the array is sorted in ascending order and the target is less than the middle element OR the array is sorted in descending order and the target is greater than the middle element, then we continue the search in the lower half of the array by setting end=mid-1 .
  • Otherwise, we perform the search in the upper half of the array by setting start=mid+1

The only thing we need to do is to figure out whether the array is sorted in ascending order or descending order. We can easily find this by comparing the first and last elements of the array.
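The Java and Python listings are missing here; a Python sketch of the order-agnostic variant described above, comparing the first and last elements to detect the sort order (sample arrays assumed):

```python
def order_agnostic_binary_search(arr, target):
    """Binary search on arr sorted in unknown (ascending or descending) order."""
    start, end = 0, len(arr) - 1
    ascending = arr[start] <= arr[end]   # detect the sort order once
    while start <= end:
        mid = (start + end) // 2
        if arr[mid] == target:
            return mid
        if (ascending and target < arr[mid]) or (not ascending and target > arr[mid]):
            end = mid - 1    # continue the search in the lower half
        else:
            start = mid + 1  # continue the search in the upper half
    return -1

print(order_agnostic_binary_search([1, 3, 5, 7, 9], 7))   # 3
print(order_agnostic_binary_search([9, 7, 5, 3, 1], 7))   # 1
```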

Order-agnostic Binary Search in Java

Order-agnostic Binary Search in Python

There will be no change in the time complexity, so it will be the same as Binary Search.

In this article, we discussed two of the most important search algorithms along with their code implementations in Python and Java. We also looked at their time complexity analysis.

Thanks for reading!

Application Developer at Thoughtworks India



Solving Problems with Binary Search


Binary search is a lot more than just a way to find elements in a sorted array. In this tutorial, I will help you understand binary search better by going through some basic problems then applying them in technical questions asked during interviews.

Beyond arrays: the discrete binary search

Problem: Finding a value in a sorted sequence

A sequence or array is really just a function which associates integers (the indices of the array) with the corresponding values. However, there is no reason to restrict our usage of binary search to sequences. In fact, we can use the same binary search algorithm on any monotonic function whose domain is the set of integers. The only difference is that we use function evaluation instead of array lookups: we find an x for which f(x) equals some target value. When working with arrays, the time complexity is O(log n). Here it may change, because we need to evaluate the function f(x) at every step, although we are free of the memory constraints that were present while working with arrays.

Which problems can be solved using binary search?

If we talk in a less mathematical way, try to break the problem into a yes-or-no question. If the problem follows the structure below, then binary search can be used (don't worry if you don't get it right away; the example problems will make it clearer).

If you answered yes for some potential solution, x means that you’d also get a yes answer for any element after x. Similarly, if you got no, you’d get a no answer for any element before x. As a consequence, if you were to ask the question for each element in the search space (in order), you would get a series of no answers followed by a series of yes answers.

It can be easily observed that Yes and No can swap places in the description above, which means we will have a series of yeses followed by nos. In this case, yes for any x will mean that we will get yes for all elements before x and no for any x will mean that we will get no for all elements after x.
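The yes/no structure above can be sketched as a binary search over a monotonic predicate; pred here is a hypothetical stand-in for the problem's yes-or-no question:

```python
def first_yes(lo, hi, pred):
    """Smallest x in [lo, hi] with pred(x) True, assuming pred is monotonic
    (a run of 'no' answers followed by a run of 'yes' answers).

    Returns hi + 1 if pred is False everywhere in the range."""
    while lo <= hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid - 1   # a yes here implies yes for everything after mid
        else:
            lo = mid + 1   # a no here implies no for everything before mid
    return lo

# Example: the first integer whose square is at least 50
print(first_yes(0, 100, lambda x: x * x >= 50))   # 8
```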

Example problems

Still confused? Let's discuss some problems which will make the method clearer. Test your understanding of binary search by clicking the link to each problem; once you're done answering, click through to the solution code. The explanation for each solution is provided in this tutorial.

Problem 1: ( Link to problem | Solution Code )

Explanation:

  • For the problem at hand, let us define a function F(x) such that F(x) = 1 if it is possible to arrange the cows in stalls such that the distance between any two cows is at least x, and F(x) = 0 otherwise.
  • Now it is easy to see that if F(x) = 0, then F(y) = 0 for all y > x. Thus, the problem satisfies the monotonicity condition necessary for binary search.
  • Since we have at least two cows, the best we can do is to push them towards the stalls at the two ends, so we cannot achieve a larger separation. Hence, F(maxbarnpos - minbarnpos + 1) = 0.
  • Now, how do we check whether F(x) = 1 for a general value of x? We can do this with a greedy algorithm: keep placing cows at the leftmost possible stalls such that they are at least distance x away from the last cow placed, assuming that the array pos containing the positions of the stalls has been sorted.
The main function will look like:
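The original listing is missing from this copy; a Python sketch under the assumptions above (can_place implements the greedy check for F(x), and the sample stalls are purely an illustration):

```python
def can_place(pos, cows, x):
    """Greedy check F(x): can we place all cows at least x apart?  pos must be sorted."""
    count = 1
    last = pos[0]              # place the first cow at the leftmost stall
    for p in pos[1:]:
        if p - last >= x:      # next stall far enough from the last cow placed
            count += 1
            last = p
    return count >= cows

def max_min_distance(pos, cows):
    """Binary search for the largest x with F(x) = 1."""
    pos = sorted(pos)
    lo, hi = 1, pos[-1] - pos[0]
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if can_place(pos, cows, mid):
            best = mid         # F(mid) = 1, try a larger separation
            lo = mid + 1
        else:
            hi = mid - 1       # F(mid) = 0, try a smaller separation
    return best

print(max_min_distance([1, 2, 8, 4, 9], 3))   # 3
```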

Problem 2: Asked by Google ( Link to Problem | Solution Code )

  • Again we will design a function F(x): F(x) = 1 if it is possible to paint the boards in x time, and F(x) = 0 otherwise.
  • As we can easily observe, if the boards can be painted in x time, then F(x) = 1 and also F(y) = 1 for all y >= x. Hence, we can use binary search to find the minimum x such that F(x) = 1.
  • For writing the IsPossible(x) function, we can just keep allocating painters to boards such that each painter takes at most x time to paint. If the number of painters allocated is not more than the number available, then it is possible to paint the boards in x time; otherwise it is not.
  • We use binary search over the answer with limits (0, maximum time it can take) and check isPossible(x). If it is possible, then we search for the answer in the left half; otherwise we go to the right half.

Problem 3: Asked by Google ( Link To Problem | Solution Code )

  • The approach to this problem is pretty much the same as the last one.

Wrapping up

If you find any problems that can be solved with binary search, share them in the comments section so we can add them to this list.




6.4. The Binary Search

It is possible to take greater advantage of the ordered vector if we are clever with our comparisons. In the sequential search, when we compare against the first item, there are at most \(n-1\) more items to look through if the first item is not what we are looking for. Instead of searching the vector in sequence, a binary search will start by examining the middle item. If that item is the one we are searching for, we are done. If it is not the correct item, we can use the ordered nature of the vector to eliminate half of the remaining items. If the item we are searching for is greater than the middle item, we know that the entire lower half of the vector as well as the middle item can be eliminated from further consideration. The item, if it is in the vector, must be in the upper half.

We can then repeat the process with the upper half. Start at the middle item and compare it against what we are looking for. Again, we either find it or split the vector in half, therefore eliminating another large part of our possible search space. Figure 3 shows how this algorithm can quickly find the value 54. The complete function is shown in CodeLens 3 .

Figure 3: Binary Search of an Ordered vector of Integers


A similar implementation can be carried out using vectors in C++.

Before we move on to the analysis, we should note that this algorithm is a great example of a divide and conquer strategy. Divide and conquer means that we divide the problem into smaller pieces, solve the smaller pieces in some way, and then reassemble the whole problem to get the result. When we perform a binary search of a list, we first check the middle item. If the item we are searching for is less than the middle item, we can simply perform a binary search of the left half of the original list. Likewise, if the item is greater, we can perform a binary search of the right half. Either way, this is a recursive call to the binary search function passing a smaller list. CodeLens 4 shows this recursive version.


There is a vector initializer within C++ that can be used much like Python slices; however, it can only be used when new vectors are created.

6.4.1. Analysis of Binary Search

To analyze the binary search algorithm, we need to recall that each comparison eliminates about half of the remaining items from consideration. What is the maximum number of comparisons this algorithm will require to check the entire list? If we start with n items, about \(\frac{n}{2}\) items will be left after the first comparison. After the second comparison, there will be about \(\frac{n}{4}\) . Then \(\frac{n}{8}\) , \(\frac{n}{16}\) , and so on. How many times can we split the list? Table 3 helps us to see the answer.

When we split the list enough times, we end up with a list that has just one item. Either that is the item we are looking for or it is not. Either way, we are done. The number of comparisons necessary to get to this point is i where \(\frac {n}{2^i} =1\) . Solving for i gives us \(i=\log n\) . The maximum number of comparisons is logarithmic with respect to the number of items in the list. Therefore, the binary search is \(O(\log n)\) .

One additional analysis issue needs to be addressed. In the recursive solution shown above, the recursive call,

binarySearch(alist[:midpoint],item)

uses the slice operator to create the left half of the list that is then passed to the next invocation (similarly for the right half as well). The analysis that we did above assumed that the slice operator takes constant time. This means that the binary search using slice will not perform in strict logarithmic time. Luckily this can be remedied by passing the list along with the starting and ending indices. The indices can be calculated as we did in Listing 3 . This is especially relevant in C++, where we are initializing a new vector for each split of our list. To truly optimize this algorithm, we could use an array and manually keep track of start and end indices of our array. Below is an example of such an implementation.
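That index-based implementation is missing from this copy; a Python sketch of the slice-free recursive version, passing start and end indices so no O(n) slice is created on each call (names follow the surrounding text):

```python
def binary_search(alist, item, first=0, last=None):
    """Recursive binary search passing start/end indices instead of slices."""
    if last is None:
        last = len(alist) - 1
    if first > last:                # empty range: item is not present
        return False
    midpoint = (first + last) // 2
    if alist[midpoint] == item:
        return True
    if item < alist[midpoint]:
        return binary_search(alist, item, first, midpoint - 1)
    return binary_search(alist, item, midpoint + 1, last)

testlist = [0, 1, 2, 8, 13, 17, 19, 32, 42]
print(binary_search(testlist, 13))   # True
print(binary_search(testlist, 3))    # False
```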

Even though a binary search is generally better than a sequential search, it is important to note that for small values of n , the additional cost of sorting is probably not worth it. In fact, we should always consider whether it is cost effective to take on the extra work of sorting to gain searching benefits. If we can sort once and then search many times, the cost of the sort is not so significant. However, for large lists, sorting even once can be so expensive that simply performing a sequential search from the start may be the best choice.

Q-7: Suppose you have the following sorted list [3, 5, 6, 8, 11, 12, 14, 15, 17, 18] and are using the recursive binary search algorithm. Which group of numbers correctly shows the sequence of comparisons used to find the key 8?

Q-8: Suppose you have the following sorted list [3, 5, 6, 8, 11, 12, 14, 15, 17, 18] and are using the recursive binary search algorithm. Which group of numbers correctly shows the sequence of comparisons used to search for the key 16?

  • 11, 17, 14, 15
  • 18, 17, 14, 15
  • 14, 12, 17, 15
  • 12, 17, 14, 15
Python Program for Binary Search (Recursive and Iterative)


In a nutshell, this search algorithm takes advantage of the fact that the collection of elements is already sorted: each comparison lets it ignore half of the remaining elements. 

  • Compare x with the middle element.
  • If x matches with the middle element, we return the mid index.
  • Else if x is greater than the mid element, then x can only lie in the right (greater) half subarray after the mid element. Then we apply the algorithm again for the right half.
  • Else if x is smaller, the target x must lie in the left (lower) half. So we apply the algorithm for the left half.

Python Program for Binary Search Using Recursion

Time Complexity : O(log n)

Auxiliary Space: O(log n), since the recursion uses call-stack space.
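Since the listing itself is not reproduced above, the following is a sketch of the standard recursive implementation this section describes (function and parameter names are ours):

```python
def binary_search_recursive(arr, low, high, x):
    """Return the index of x in sorted arr[low..high], or -1 if absent."""
    if low > high:               # empty range: x is not present
        return -1
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    if arr[mid] > x:             # x can only lie in the left (lower) half
        return binary_search_recursive(arr, low, mid - 1, x)
    # x can only lie in the right (greater) half
    return binary_search_recursive(arr, mid + 1, high, x)
```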

Python Program for Binary Search Using Iteration

Time Complexity : O(log n)

Auxiliary Space : O(1)
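Likewise, a sketch of the standard iterative implementation (the listing is not reproduced above; names are ours):

```python
def binary_search_iterative(arr, x):
    """Return the index of x in the sorted list arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:     # x lies in the right half
            low = mid + 1
        else:                # x lies in the left half
            high = mid - 1
    return -1
```

The loop keeps only two index variables, which is why the auxiliary space is O(1).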

Python Program for Binary Search Using the built-in bisect module

Step by step approach:

  • The code imports the bisect module which provides support for binary searching.
  • The binary_search_bisect() function is defined which takes an array arr and the element to search x as inputs.
  • The function calls the bisect_left() function of the bisect module which finds the position of the element in the sorted array arr where x should be inserted to maintain the sorted order. If the element is already present in the array, this function will return its position.
  • The function then checks if the returned index i is within the range of the array and if the element at that index is equal to x.
  • If the condition is true, then the function returns the index i as the position of the element in the array.
  • If the condition is false, then the function returns -1 indicating that the element is not present in the array.
  • The code then defines an array arr and an element x to search.
  • The binary_search_bisect() function is called with arr and x as inputs and the returned result is stored in the result variable.
  • The code then checks if the result is not equal to -1, indicating that the element is present in the array. If true, it prints the position of the element in the array.
  • If the result is equal to -1, then the code prints a message that the element is not present in the array.

Auxiliary Space : O(1)  
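The steps above can be sketched as follows (a reconstruction, since the original listing is not reproduced here):

```python
from bisect import bisect_left

def binary_search_bisect(arr, x):
    """Return the index of x in the sorted list arr, or -1 if absent."""
    i = bisect_left(arr, x)          # leftmost insertion point for x
    if i != len(arr) and arr[i] == x:
        return i
    return -1

arr = [2, 3, 4, 10, 40]
result = binary_search_bisect(arr, 10)
if result != -1:
    print("Element is present at index", result)
else:
    print("Element is not present in array")
```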


Modified crayfish optimization algorithm for solving multiple engineering application problems

Open access · Published: 24 April 2024 · Volume 57, article number 127 (2024)


Heming Jia, Xuelian Zhou, Jinrui Zhang, Laith Abualigah, Ali Riza Yildiz and Abdelazim G. Hussien

The Crayfish Optimization Algorithm (COA) is innovative and easy to implement, but its search efficiency decreases in the later stages and it easily falls into local optima. To solve these problems, this paper proposes a modified crayfish optimization algorithm (MCOA). Based on the survival habits of crayfish, MCOA introduces an environmental renewal mechanism that uses water quality factors to guide crayfish toward a better environment. In addition, a ghost opposition-based learning strategy is integrated into MCOA to enhance its ability to escape local optima. To evaluate the performance of MCOA, tests were performed on the IEEE CEC2020 benchmark functions, and experiments were conducted on four constrained engineering problems and on feature selection problems. For the constrained engineering problems, MCOA improves on COA by 11.16%, 1.46%, 0.08% and 0.24%, respectively. For the feature selection problems, the average fitness value and accuracy improve by 55.23% and 10.85%, respectively. MCOA thus shows better optimization performance in solving complex spatial and practical application problems: the combination of the environmental renewal mechanism and the ghost opposition-based learning strategy significantly improves its performance. This finding has important implications for the development of the field of optimization.

Graphical Abstract



1 Introduction

For a considerable period, engineering application problems have been widely discussed. At present, raising the modern scientific level of engineering construction has become a goal of continuous human effort; such problems include constrained engineering design problems (Zhang et al. 2022a; Mortazavi 2019), which are affected by a series of external factors, and feature selection problems (Kira and Rendell 1992), among others. Constrained engineering design problems refer to achieving optimization objectives and reducing computational cost under many external constraints. They arise widely in mechanical engineering (Abualigah et al. 2022), electrical engineering (Razmjooy et al. 2021), civil engineering (Kaveh 2017), chemical engineering (Talatahari et al. 2021) and other engineering fields, with applications such as workshop scheduling (Meloni et al. 2004), wind power generation (Lu et al. 2021), UAV path planning (Belge et al. 2022), parameter extraction of photovoltaic models (Zhang et al. 2022b; Zhao et al. 2022), optimization of seismic base isolation systems (Kandemir and Mortazavi 2022), optimal design of RC support foundation systems for industrial buildings (Kamal et al. 2023), joint optimization of fuel type and external wall insulation performance of intelligent residential buildings (Moloodpoor and Mortazavi 2022), and economic optimization of double-tube heaters (Moloodpoor et al. 2021).

Feature selection is the process of choosing specific subsets of features from a larger set based on defined criteria. In this approach, each original feature within the subset is individually evaluated using an assessment function. The aim is to select pertinent features that carry distinctive characteristics. This selection process reduces the dimensionality of the feature space, enhancing the model's generalization ability and accuracy. The ultimate goal is to create the best possible combination of features for the model. By employing feature selection, the influence of irrelevant factors is minimized. This reduction in irrelevant features not only streamlines the computational complexity but also reduces the time costs associated with processing the data. Through this method, redundant and irrelevant features are systematically removed from the model. This refinement improves the model’s accuracy and results in a higher degree of fit, ensuring that the model aligns more closely with the underlying data patterns.

In practical applications of feature selection, models are primarily refined using two main methods: the filter (Cherrington et al. 2019) and wrapper (Jović et al. 2015) techniques. The filter method employs a scoring mechanism to assess and rank the model's features, selecting the highest-scoring subset as the optimal feature combination. The wrapper method, on the other hand, integrates the selection process directly into the learning algorithm: it embeds the feature subset evaluation within the learning process, assessing the correlation between the chosen features and the model. In recent years, applications of heuristic algorithms have become ubiquitous and are closely tied to the rapid development of modern society. These algorithms play an indispensable role in solving a myriad of complex engineering problems and feature selection challenges, and have proven particularly effective on spatial, dynamic, and random problems, showing significant practical impact and tangible outcomes.

With the rapid development of society, science, and technology, ever more complex and hard-to-describe multi-dimensional engineering problems have appeared in the course of research. Navigating these complexities demands profound contemplation and exploration. While traditional heuristic algorithms have proven effective on simpler, foundational problems, they fall short on the novel and intricate multi-dimensional challenges posed by the current scientific landscape and societal needs. Researchers have therefore continued to experiment: by cross-combining and validating existing heuristic algorithms, they devised metaheuristic algorithms (MAs) (Yang 2011). MAs are heuristic-function-based algorithms. They evaluate the current state of the problem and candidate solutions to guide choices in the search space, and improve the efficiency and accuracy of problem solving by combining multiple heuristic functions and updating the search direction at each step based on their weights. This diversity makes MAs a universal problem solver that adapts to the unique challenges of different problem domains, and it represents a powerful paradigm shift in computational problem solving. Compared with traditional algorithms, MAs have made great progress in finding optimal solutions, escaping local optima, and overcoming late-stage convergence difficulties through the synergy of different algorithms. These enhancements mark significant progress: they demonstrate the adaptability of the scientific method, emphasize the importance of continuous research and cooperation, and enable researchers to navigate complex engineering problem landscapes with greater accuracy and efficiency.

Research shows that MAs fall into four broad categories: swarm-based, natural-evolution-based, human-based, and physics-based. These categories include a wide range of innovative problem-solving approaches, each drawing inspiration from a different aspect of nature, human behavior, or physical principles; researchers explore these different pathways to solve complex challenges and optimize solutions efficiently. First, swarm-based optimization algorithms use the collective survival wisdom of a population to solve problems. For example, the Particle Swarm Optimization algorithm (PSO) (Wang et al. 2018a) is based on the group behavior of birds; it searches quickly but is restricted to real-valued problems, handles discrete optimization poorly, and is prone to falling into local optima. The Artificial Bee Colony algorithm (ABC) (Jacob and Darney 2021) realizes information sharing and communication among individuals as bees collect honey according to their respective divisions of labor. In the Salp Swarm Algorithm (SSA) (Mirjalili et al. 2017), individual salps link end to end, moving and preying in a chain in which followers trail the leader under a strict hierarchy. In Ant Colony Optimization (ACO) (Dorigo et al. 2006), foraging ants rely on pheromone accumulated along paths to find the optimal path in a spontaneous, organized manner.

Second, natural-evolution-based algorithms are inspired by survival of the fittest: they find the best solution by preserving the traits of strong, easily surviving individuals. For example, Genetic Programming (GP) (Espejo et al. 2009) uses a tree structure to model the natural laws of biological inheritance and evolution. In the Evolution Strategy algorithm (ES) (Beyer and Schwefel 2002), a species adapts to its environment by producing offspring that are similar to, but different from, the parent through mutation and recombination. Differential Evolution (DE) (Storn and Price 1997) eliminates poor individuals and retains good ones during evolution, so that the good ones keep approaching the optimal solution; it has strong global search ability in the initial iterations, but when the population is small, individuals are difficult to update and it easily falls into local optima. The Biogeography-Based Optimization algorithm (BBO) (Simon 2008), influenced by biogeography, filters out the global optimum through iterated migration and mutation of species information.

Third, human-based optimization algorithms exploit the diverse and complex social relationships and activities of humans in specific environments. For example, Teaching–Learning-Based Optimization (TLBO) (Rao and Rao 2016) obtains the optimal solution by simulating the teaching relationship between students and teachers; it simplifies the information-sharing mechanism within each round so that all evolved individuals can converge to the global optimum faster, but it often loses this advantage on optimization problems far from the origin. The Coronavirus Mask Protection Algorithm (CMPA) (Yuan et al. 2023), mainly inspired by human self-protection against coronavirus, builds a mathematical model of self-protective behavior to solve optimization problems. The Cultural Evolution Algorithm (CEA) (Kuo and Lin 2013) uses a systems-thinking cultural model for exploitation, achieving cultural transformation to reach the optimal solution. The Volleyball Premier League algorithm (VPL) (Moghdani and Salimifard 2018) simulates the training, competition, and interaction of teams in a volleyball league to solve global optimization problems.

Finally, physics-based optimization algorithms use basic physical principles to simulate the behavior of particles in space. For example, the Snow Ablation Optimizer (SAO) (Deng and Liu 2023), inspired by the physical behavior of snow in nature, simulates sublimation and ablation to model the transformation among snow, water, and steam. The RIME algorithm (RIME) (Su et al. 2023) balances exploration and exploitation with a mathematical model based on the growth of soft rime and hard rime in nature. The Central Force Optimization algorithm (CFO) (Formato 2007) addresses the complex computation of the initial detector by proposing a uniform-design mathematical model that reduces computation time. The Sine Cosine Algorithm (SCA) (Mirjalili 2016) builds mathematical models and seeks optimal solutions based on the oscillation and periodicity of the sine and cosine functions; relative to a candidate solution set of a given size, it has strong search ability and can jump out of local optima, but its results on some test functions fluctuate around the optimum, it shows some premature convergence, and its convergence needs improvement.

Alongside the original algorithms, many improved MAs have been proposed to further raise optimization performance on practical problems. For example, Yujun Zhang et al. combined the Arithmetic Optimization Algorithm (AOA) with the Aquila Optimizer (AO) to propose a new metaheuristic, AOAAO (Zhang et al. 2022c). The CSCAHHO algorithm (Zhang et al. 2022d) is obtained by chaotically hybridizing the Sine Cosine Algorithm (SCA) and the Harris Hawks Optimization algorithm (HHO). The LMRAOA algorithm was proposed to solve numerical and engineering problems (Zhang et al. 2022e). Yunpeng Ma et al. proposed an improved teaching-based optimization algorithm to reduce NOx emission concentration in circulating fluidized bed boilers (Ma et al. 2021). The modified SOS algorithm (MSOS) (Kumar et al. 2019), based on the natural Symbiotic Organisms Search (SOS) algorithm, improves search efficiency by introducing adaptive return factors and modified parasitism vectors. The modified beluga whale optimization algorithm (MBWO) (Jia et al. 2023a) solves engineering problems with multiple strategies, modeling beluga populations gathering to feed and finding new habitats during long-distance migration. Betul Sultan Yildiz et al. proposed a novel hybrid optimizer, AO-NM, which aims to optimize engineering design and manufacturing problems (Yıldız et al. 2023).

The Crayfish Optimization Algorithm (COA) (Jia et al. 2023b) is a novel metaheuristic algorithm rooted in population survival wisdom, introduced by Heming Jia et al. in 2023. Drawing inspiration from crayfish behavior, including heat avoidance, competition for caves, and foraging, COA employs a dual-stage strategy: the exploration stage replicates crayfish searching the space for caves to shelter in, while the exploitation stage mimics their competition for caves and search for food. Crayfish, naturally averse to dry heat, thrive in freshwater habitats. To simulate their behavior under high temperatures and food scarcity, COA incorporates temperature variations into its model, so that the algorithm dynamically adapts to environmental factors. Based on temperature fluctuations, crayfish autonomously select among seeking shelter, competing for caves, and foraging. When the temperature exceeds 30 °C, crayfish instinctively seek refuge in cool, damp caves to escape the heat; if another crayfish is already present in a cave, a competition for occupancy ensues. Conversely, when the temperature drops below 30 °C, crayfish enter the foraging stage, making decisions about food consumption based on the size of the available food items. COA switches between the exploration and exploitation stages through temperature variation, aiming to balance the two capabilities. However, COA emulates only the impact of temperature on crayfish behavior, overlooking other significant crayfish habits, which leads to inherent limitations: in the later stages of a global search, crayfish may cluster around local optimum positions, restricting movement. This hampers the crayfish's search behavior, slows convergence, and increases the risk of falling into local optima, making it challenging to find the optimal solution.

In response to the aforementioned challenges, this paper proposes a Modified Crayfish Optimization Algorithm (MCOA). Inspired by crayfish's preference for living in fresh, flowing water, MCOA introduces an environmental update mechanism: it incorporates the crayfish's innate perception abilities to assess the quality of the surrounding aquatic environment and determine whether the current habitat is suitable for survival. Crawling upstream toward a more suitable aquatic environment is simulated with adaptive flow factors, the direction of water flow being sensed through the crayfish's second and third walking legs. This mechanism partially replicates the survival and reproduction behavior of crayfish, ensures the continual movement of the population, heightens randomness within the group, widens the crayfish's search scope, raises the algorithm's exploration efficiency, and effectively strengthens its global optimization capability. Additionally, a ghost opposition-based learning strategy (Jia et al. 2023c) introduces random population reinitialization when the algorithm becomes trapped in local optima, significantly improving its ability to escape them and promoting better exploration of the solution space. With these two strategies carefully integrated, the search efficiency and predation speed of the crayfish algorithm improve substantially, and the algorithm's convergence rate and global optimization ability are significantly enhanced, leading to more effective and efficient problem solving.

In the experimental section, we conducted a comprehensive comparison between MCOA and nine other metaheuristic algorithms. We utilized the IEEE CEC2020 benchmark function to evaluate the performance of the algorithm. The evaluation involved statistical methods such as the Wilcoxon rank sum test and Friedman test to rank the averages, validating the efficiency of the MCOA algorithm and the effectiveness of the proposed improvements. Furthermore, MCOA was applied to address four constrained engineering design problems as well as the high-dimensional feature selection problem using the wrapper method. These practical applications demonstrated the practicality and effectiveness of MCOA in solving real-world engineering problems.

The main contributions of this paper are as follows:

In the environmental renewal mechanism, the water quality factor and roulette wheel selection method are introduced to simulate the process of crayfish searching for a more suitable water environment for survival.
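For reference, generic roulette wheel (fitness-proportionate) selection can be sketched as below. The water quality factor weighting used by MCOA is not reproduced in this text, so the weights here are left abstract, and all names are ours:

```python
import random

def roulette_wheel_select(weights, rng=random):
    """Pick an index with probability proportional to its weight.

    `weights` is any list of non-negative scores (in MCOA these would
    be derived from the water quality factor, not shown here).
    """
    total = sum(weights)
    pick = rng.uniform(0, total)     # a point on the cumulative wheel
    running = 0.0
    for i, w in enumerate(weights):
        running += w
        if pick <= running:
            return i
    return len(weights) - 1          # guard against float rounding
```

A higher weight occupies a larger slice of the wheel, so better environments are chosen more often while worse ones still retain a chance, preserving diversity.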

The introduction of the ghost opposition-based learning strategy enhances the randomness of crayfish update locations, effectively preventing the algorithm from getting trapped in local optima, and improving the overall global optimization performance of the algorithm.

The fixed value of food intake is adaptively adjusted based on the number of evaluations, enhancing the algorithm's capacity to escape local optima. This adaptive change ensures a more dynamic exploration of the solution space, improving the algorithm's overall optimization effectiveness.

The MCOA’s performance is compared with nine metaheuristics, including COA, using the IEEE CEC2020 benchmark function. The comparison employs the Wilcoxon rank sum test and Friedman test to rank the averages, providing evidence for the efficiency of MCOA and the effectiveness of the proposed improvements.

The application of MCOA to address four constrained engineering design problems and the high-dimensional feature selection problem using the wrapper method demonstrates the practicality and effectiveness of MCOA in real-world applications.

The main structure of this paper is as follows. Section 1 serves as a brief introduction to the entire document, providing an overview of the topics covered. Section 2 provides a comprehensive summary of the Crayfish Optimization Algorithm (COA). Section 3 proposes the modified crayfish optimization algorithm (MCOA); by adding the environment updating mechanism and the ghost opposition-based learning strategy, MCOA enhances global search ability and convergence speed to some extent. Section 4 shows the experimental results and analysis of MCOA on the IEEE CEC2020 benchmark functions. Section 5 applies MCOA to four kinds of constrained engineering design problems. Section 6 applies MCOA to the high-dimensional feature selection problem with wrapper methods to demonstrate its effectiveness in practical applications. Finally, Section 7 concludes the paper.

2 Crayfish optimization algorithm (COA)

The crayfish, also called the red crayfish or freshwater crayfish, is a crustacean living in fresh water. Because it eats broadly, grows fast, migrates quickly, and adapts strongly, it has formed an absolute advantage in its ecological environment. Changes in temperature often cause changes in crayfish behavior: when the temperature is too high, crayfish enter caves to avoid the damage of high temperature, and when the temperature is suitable, they climb out of their caves to forage. Based on these living habits, COA models three stages, summer escape, competition for caves, and foraging, corresponding to these three habits, respectively.

Crayfish are ectotherms, and temperature differences drive differences in their behavior; they are active at temperatures between 20 °C and 35 °C. The temperature is calculated as follows:

where temp represents the temperature of the crayfish's environment.

2.1 Initializing the population

In the d-dimensional optimization problem of COA, each crayfish is a 1 × d matrix representing a solution of the problem. In a set of variables (X1, X2, X3, …, Xd), the position (X) of each crayfish lies between the upper bound (ub) and lower bound (lb) of the search space. At each evaluation, the algorithm computes a candidate solution, compares it with those from previous evaluations, and stores the best one found as the optimal solution of the whole problem. The position of the initialized crayfish population is calculated using the following formula.

where X i,j denotes the position of the i-th crayfish in the j-th dimension, ub j denotes the upper bound of the j-th dimension, lb j denotes the lower bound of the j-th dimension, and rand is a random number from 0 to 1.
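The equation itself is not reproduced above, but the variable definitions imply the standard uniform initialization X(i,j) = lb(j) + rand × (ub(j) − lb(j)). A sketch under that assumption (names are ours; COA's reference code may differ in layout):

```python
import random

def initialize_population(n, dim, lb, ub, rng=random):
    """Initialize n crayfish positions uniformly inside [lb, ub].

    Each position X[i][j] = lb[j] + rand * (ub[j] - lb[j]), with rand
    drawn anew per dimension, matching the definitions in the text.
    """
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(n)]
```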

2.2 Summer escape stage (exploration stage)

In this paper, 30 °C is taken as the dividing line for judging whether the current living environment is a high-temperature one. When the temperature is greater than 30 °C, it is summer, and to avoid the harm of the high-temperature environment, a crayfish looks for a cool, moist cave and enters it to escape the heat. The cave position is calculated as follows.

where X G represents the optimal position obtained so far for this evaluation number, and X L represents the optimal position of the current population.

The competition among crayfish for a cave is a random event. To simulate it, a random number rand is defined: rand < 0.5 means that no other crayfish is currently competing for the cave, and the crayfish enters the cave directly for the summer. In this case, the position update is calculated as follows.

Here, X new is the next generation position after location update, and C 2 is a decreasing curve. C 2 is calculated as follows.

Here, FEs represents the number of evaluations and MaxFEs represents the maximum number of evaluations.
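Since the cave and summer-resort equations are not reproduced above, the following sketch uses the forms reported for the original COA (cave X_shade = (X_G + X_L)/2, update X_new = X_i + C2 · rand · (X_shade − X_i), decreasing curve C2 = 2 − FEs/MaxFEs); treat these exact forms as assumptions:

```python
import random

def summer_resort_step(x_i, x_G, x_L, FEs, MaxFEs, rng=random):
    # Cave: midpoint of the global best X_G and the population best X_L.
    x_shade = [(g + l) / 2 for g, l in zip(x_G, x_L)]
    c2 = 2 - FEs / MaxFEs  # decreasing curve C2
    # X_new = X_i + C2 * rand * (X_shade - X_i)
    return [xi + c2 * rng.random() * (s - xi) for xi, s in zip(x_i, x_shade)]
```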

2.3 Competition stage (exploitation stage)

When the temperature is greater than 30 °C and rand ≥ 0.5, other crayfish are competing for the same cave. The two crayfish then struggle over the cave, and crayfish X i adjusts its position according to the position of the other crayfish X z . The adjusted position is calculated as follows.

Here, z represents the random individual of the crayfish, and the random individual calculation formula is as follows.

where, N is the population size.
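A sketch of the competition update, assuming the form X_new = X_i − X_z + X_shade reported for the original COA; drawing z uniformly over the whole population (possibly including i) is a simplification of the random-individual formula:

```python
import random

def competition_step(X, i, x_shade, rng=random):
    z = rng.randrange(len(X))  # random competitor index (simplification)
    # X_new = X_i - X_z + X_shade, component-wise
    return [xi - xz + s for xi, xz, s in zip(X[i], X[z], x_shade)]
```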

2.4 Foraging stage (exploitation stage)

The foraging behavior of crayfish is affected by temperature: a temperature of at most 30 °C is the condition for crayfish to climb out of the cave to find food. When the temperature is less than or equal to 30 °C, the crayfish leaves the cave and judges the location of the food according to the best position obtained so far in this evaluation, and then moves there to complete foraging. The position of the food is calculated as follows.

The amount of food crayfish eat depends on the temperature. When the temperature is between 20 °C and 30 °C, crayfish forage vigorously, and the most food and the maximum food intake occur at 25 °C. Thus, the food intake of crayfish approximately follows a normal distribution. Food intake is calculated as follows.

Here, µ is the most suitable temperature for crayfish feeding, and σ and C 1 are the parameters used to control the variation of crayfish intake at different temperatures.

The food a crayfish obtains depends not only on its intake but also on the size of the food. If the food is too large, the crayfish cannot eat it directly; it must first tear the food up with its claws. The size of the food is calculated as follows.

Here, C 3 is the food factor, which represents the largest food, and its value is 3, fitness i represents the fitness value of the i-th crayfish, and fitness food represents the fitness value of the location of the food.
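The intake and food-size formulas described above can be sketched as below. Only C3 = 3 is stated in the text; the values µ = 25, σ = 3 and C1 = 0.2, and the exact normal-density form of p, are assumed illustrative settings:

```python
import math
import random

MU, SIGMA, C1 = 25.0, 3.0, 0.2  # assumed illustrative parameter values
C3 = 3.0                         # food factor, stated in the text

def food_intake(temp):
    # Normal-shaped intake p, peaking at the optimal temperature MU = 25 °C.
    return C1 * (1 / (SIGMA * math.sqrt(2 * math.pi))) * \
        math.exp(-(temp - MU) ** 2 / (2 * SIGMA ** 2))

def food_size(fitness_i, fitness_food, rng=random):
    # Q = C3 * rand * fitness_i / fitness_food (assumed form)
    return C3 * rng.random() * fitness_i / fitness_food
```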

Crayfish use the value of the maximum food Q to judge the size of the food obtained and thus decide the feeding method. When Q  > ( C 3  + 1)/2, it means that the food is too large for the crayfish to eat directly, and it needs to tear the food with its claws and eat alternately with the second and third legs. The formula for shredding food is as follows.

After the food is shredded to an easily eaten size, the crayfish alternately picks up the food with its second and third walking legs and puts it into its mouth. To simulate this alternating feeding, mathematical models based on the sine and cosine functions are used. The formula for alternating feeding is as follows.

When Q  ≤ ( C 3  + 1)/2, it indicates that the food size is suitable for the crayfish to eat directly at this time, and the crayfish will directly move towards the food location and eat directly. The formula for direct crayfish feeding is as follows.
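The feeding cases above can be sketched as follows, assuming the forms reported for the original COA (shredding X_food = e^{−1/Q} · X_food, alternating feeding with cos/sin terms, direct feeding X_new = (X_i − X_food) · p + p · rand · X_i); these exact expressions are reconstructions, not quoted from the text:

```python
import math
import random

def feeding_step(x_i, x_food, p, Q, C3=3.0, rng=random):
    if Q > (C3 + 1) / 2:
        # Food too large: shred it first, X_food = exp(-1/Q) * X_food,
        # then eat alternately (sine/cosine model of the two leg pairs).
        shredded = [math.exp(-1 / Q) * f for f in x_food]
        r = rng.random()
        return [xi + f * p * (math.cos(2 * math.pi * r) -
                              math.sin(2 * math.pi * r))
                for xi, f in zip(x_i, shredded)]
    # Food small enough to eat directly:
    # X_new = (X_i - X_food) * p + p * rand * X_i (assumed form)
    r = rng.random()
    return [(xi - f) * p + p * r * xi for xi, f in zip(x_i, x_food)]
```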

2.5 Pseudo-code for COA

figure b

Crayfish optimization algorithm pseudo-code

3 Modified crayfish optimization algorithm (MCOA)

Based on the crayfish optimization algorithm, we propose MCOA, an improved COA for solving high-dimensional feature selection problems. The quality of the aquatic environment has a great impact on the survival of crayfish: they mostly feed on plants and prefer fresh water. Dissolved oxygen is indispensable for aquatic life; the higher its content in the water body, the more vigorously crayfish feed, the faster they grow and the less disease they suffer. Places with faster water flow have better oxygen exchange and more aquatic plants and are therefore suitable habitats, so crayfish show strong hydrotaxis. When crayfish perceive that the current environment is too dry and hot or lacks food, they use the perception (r) of their second and third walking legs to judge the direction of water flow and crawl against it to find an aquatic environment with sufficient oxygen and food to sustain life. A good aquatic environment, with sufficient oxygen and abundant aquatic plants, ensures to a certain extent the survival and reproduction of crayfish.

In addition, we introduce ghost opposition-based learning to help MCOA escape local optima. The ghost opposition-based learning strategy combines the candidate individual, the current individual and the optimal individual to randomly generate a new candidate position that replaces a poor previous candidate, and then takes the best point or the candidate solution as the central point for a broader and more targeted exploration of other positions. Traditional opposition-based learning (Mahdavi et al. 2018 ) takes the central point as the base and generates opposite points in a fixed way: most points gather near the central point, their positions do not exceed the distance between the current point and the central point, and most solutions stay close to the optimal individual. If the optimal individual is not near the current exploration point, the algorithm therefore falls into a local optimum and struggles to find the optimal solution. Compared with traditional opposition-based learning, ghost opposition-based learning generates an opposition solution that can be dynamically changed by adjusting the parameter k, which expands the algorithm's exploration range, effectively handles the case where the optimal solution lies outside the center-based search range, and makes it easier for the algorithm to jump out of local optima.

According to the life habits of crayfish, this paper proposes a Modified Crayfish Optimization Algorithm (MCOA), which uses environment update mechanism and ghost opposition-based learning strategy to improve COA, and shows the implementation steps, pseudo-code and flow chart of MCOA algorithm as follows.

3.1 Environment update mechanism

In the environmental renewal mechanism, a water quality factor V is introduced to represent the quality of the aquatic environment at the current location. In order to simplify the design and computational complexity of the system, the water quality factor V of the MCOA is represented by a hierarchical discretization, and its value range is set to 0 to 5. Crayfish perceive the quality of the current aquatic environment through the perception ( r ) of the second and third legs, judge whether the current living environment can continue to survive through the perception, and independently choose whether to update the current location. The location update is calculated as follows.

Among them, each crayfish has a certain difference in its own perception of water environment r , X 2 is a random position between the candidate optimal position and the current position, which is calculated by Eq. ( 15 ), X 1 is a random position in the population, and B is an adaptive water flow factor, which is calculated by Eq. ( 16 ).

Among them, the sensing ability r of the crayfish's second and third legs is a random number in [0, 1]. c is a constant representing the water flow velocity factor, with a value of 2. When V ≤ 3, the crayfish perceives the quality of the current living environment to be good and suitable for continued survival. When V > 3, the crayfish perceives that the current living environment is poor and needs to crawl against the direction of the water flow it perceives, so as to find an aquatic environment with sufficient oxygen and abundant food (Fig. 1).
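Since Eqs. (14)–(16) are not reproduced above, the following sketch captures only the decision logic of the environment update mechanism; the move toward the candidate position X2 scaled by B and r is a hypothetical illustrative form, not the paper's exact formula:

```python
def environment_update(x_i, x1, x2, V, B, r):
    # Decision logic only: Eqs. (14)-(16) are not reproduced in the text.
    if V <= 3:
        return list(x_i)  # environment judged good: keep the position
    # Poor environment: crawl against the flow toward the candidate X2
    # (hypothetical illustrative form, not the paper's exact formula).
    return [xi + B * r * (a - b) for xi, a, b in zip(x_i, x2, x1)]
```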

figure 1

Classification of MAs

In the environment update mechanism, to describe the upstream behavior of crayfish in more detail, the perception area of a crayfish is abstracted as a circle in MCOA, with the crayfish at its center. In each evaluation, a random angle θ is first chosen by roulette wheel selection to determine the crayfish's moving direction within the circular area, and the moving path is then determined from this direction. Over the whole circle, the random angle ranges from 0 to 360 degrees, so the value of θ lies in [−1, 1]. The differing random angles mean that each crayfish moves in a random direction, which broadens the search range, enhances the randomness of positions and the ability to escape local optima, and avoids premature convergence (Fig. 2).

figure 2

Schematic diagram of the environment update mechanism

3.2 Ghost opposition-based learning strategy

The ghost opposition-based learning strategy is illustrated here in a two-dimensional space, as shown in Fig. 3. On the X-axis, [ub, lb] represents the search range of the solution, and the ghost is generated as shown in Fig. 3. Assume the position of a new candidate solution is X_new with height h1_i; the projection of the best solution X_G on the X-axis has height h2_i; and the projection X_i of the current solution has height h3_i. From these, the position of the ghost is obtained: by vector calculation, its projection on the X-axis is x_i and its height is h_i. The ghost position is calculated using the following formula.

figure 3

Schematic diagram of ghost opposition-based learning strategy

In Fig. 3, the Y-axis represents the convex lens. Suppose there is a ghost position P_i, where x_i is its projection on the X-axis and h_i is its height. P*_i is the real image obtained through convex lens imaging; its projection on the X-axis is x*_i, with height h*_i. Therefore, the opposite individual x*_i of individual x_i can be obtained: x*_i is the point corresponding to the ghost individual x_i with O as the base point. According to the lens imaging principle, we obtain Eq. ( 18 ), calculated as follows.

The strategy formula of ghost opposition-based learning is evolved from Eq. ( 18 ). The strategy formula of ghost opposition-based learning is calculated as follows.
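The lens-imaging relation behind Eq. (18) gives the opposite point x* = (lb + ub)/2 + ((lb + ub)/2 − x)/k, where k = h/h* is the scaling factor; the ghost-position combination itself (Eq. (19)) is not reproduced above, so only the lens-imaging core is sketched:

```python
def lens_opposite(x, lb, ub, k):
    # Lens-imaging opposite point: x* = (lb+ub)/2 + ((lb+ub)/2 - x) / k.
    # k = 1 recovers classic opposition x* = lb + ub - x; varying k moves
    # the opposite point, widening the explorable range.
    mid = (lb + ub) / 2
    return mid + (mid - x) / k
```

With k = 1 this reduces to classic opposition-based learning, which matches the text's claim that adjusting k dynamically changes the opposition solution.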

3.3 Implementation of MCOA algorithm

3.3.1 Initialization phase

Initialize the population size N , the population dimension d , and the number of evaluations FEs . The initialized population is shown in Eq. ( 2 ).

3.3.2 Environment update mechanism

Crayfish judge the quality of the current aquatic environment according to the water quality factor V and infer whether they can continue to survive there. When V > 3, the crayfish perceives the current aquatic environment as poor and unsuitable for survival; based on the sensory information of the second and third legs and the adaptive flow factor, it judges the direction of the current flow and then moves upstream to a better aquatic environment, updating its position according to Eq. ( 14 ). When V ≤ 3, the crayfish perceives the current living environment as good and suitable for survival, and does not need to update its position.

3.3.3 Exploration phase

When the temperature is greater than 30 ℃ and V > 3, the crayfish perceives the current aquatic environment as poor: the cave is dry and without moisture and cannot provide relief from the heat. The crayfish must first update its position by crawling against the flow direction, and then find a cool, moist cave in a better-quality aquatic environment for the summer.

3.3.4 Exploitation stage

When the temperature is less than 30 ℃ and V > 3, the crayfish perceives the current aquatic environment as poor, without enough food to sustain it. It must escape the food-scarce environment by crawling against the current, and find a better aquatic environment to sustain survival and reproduction.

3.3.5 Ghost opposition-based learning strategy

By combining the candidate individual, the current individual and the optimal individual, a candidate solution is randomly generated and compared with the current solution; the better individual is retained, the opposite individual is obtained, and the location of the ghost is determined. This combination of multiple positions effectively prevents the algorithm from falling into a local optimum; the specific implementation is given in Eq. ( 19 ).

3.3.6 Update the location

The updated position is determined by comparing fitness values. If the updated individual has better fitness, it replaces the original individual; otherwise, the original individual is retained as the current best solution.
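The greedy position update described above reduces to a fitness comparison; a minimal sketch (minimization assumed):

```python
def greedy_update(x_old, f_old, x_new, f_new):
    # Keep whichever position has the better (smaller) fitness value.
    return (x_new, f_new) if f_new < f_old else (x_old, f_old)
```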

The pseudocode for MCOA is as follows (Algorithm 2).

figure c

Modified Crayfish optimization algorithm pseudo-code

The flow chart of the MCOA algorithm is as follows.

3.4 Computational complexity analysis

The complexity analysis of an algorithm is an essential step in evaluating its performance. In this experiment, we adopt the complexity evaluation standard for single-objective optimization algorithms from the IEEE CEC2020 Special Session and Competition. The complexity of MCOA depends mainly on several parameters: the population size ( N = 30), the problem dimension ( d = 10), the maximum number of evaluations ( MaxFEs = 100,000) and the solution function ( C ). First, the running time T 0 of the test program (Algorithm 3) is recorded. Second, in the same dimension, the 10 test functions of the IEEE CEC2020 set are each evaluated 100,000 times and the running time T 1 is recorded. Finally, MCOA is run 5 times, each performing 100,000 evaluations of the 10 test functions in the same dimension, and the average running time is taken as T 2 . The time complexity of MCOA is then given by Eq. ( 21 ).
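Assuming Eq. (21) follows the standard IEEE CEC convention, the complexity measure is (T2 − T1)/T0:

```python
def cec_complexity(T0, T1, T2):
    # Standard IEEE CEC measure: (T2 - T1) / T0, where T0 times the test
    # program, T1 the raw function evaluations, T2 the full algorithm runs.
    return (T2 - T1) / T0
```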

figure d

IEEE CEC2020 complexity analysis test program

The experimental data for the algorithm complexity analysis are shown in Table 1. In this analysis, we compare the MCOA algorithm with seven other metaheuristic algorithms to illustrate its complexity. From Table 1, we can see that the complexity of MCOA is much lower than that of comparison algorithms such as ROA, STOA and AOA. However, compared with COA, the complexity of MCOA is slightly higher, because the environment update mechanism and the ghost opposition-based learning strategy take a certain amount of time to update positions. Although the improved strategies of MCOA increase the computation time to a certain extent, the various experiments in section four of this paper show that the optimization performance of MCOA is significantly improved, which proves the good effect of the improved strategies.

4 Experimental results and discussion

The experiments are carried out on a 2.50 GHz 11th Gen Intel(R) Core(TM) i7-11700 CPU with 16 GB memory and a 64-bit Windows 11 operating system, using Matlab R2021a. To verify the performance of the MCOA algorithm, MCOA is compared with nine metaheuristic algorithms in this subsection. In the experiments, we use the IEEE CEC2020 test functions to evaluate the optimization performance of the MCOA algorithm (Fig. 4).

figure 4

Flow chart of the MCOA algorithm

4.1 Experiments with IEEE CEC2020 test functions

In this subsection, the Crayfish Optimization Algorithm (COA), Remora Optimization Algorithm (ROA) (Jia et al. 2021 ), Sooty Tern Optimization Algorithm (STOA) (Dhiman and Kaur 2019 ), Arithmetic Optimization Algorithm (AOA) (Abualigah et al. 2021 ), Harris Hawks Optimization algorithm (HHO) (Heidari et al. 2019 ), Prairie Dog Optimization algorithm (PDO) (Ezugwu et al. 2022 ), Genetic Algorithm (GA) (Mirjalili and Mirjalili 2019 ), Modified Sand Cat Swarm Optimization algorithm (MSCSO) (Wu et al. 2022 ) and the competition algorithm LSHADE (Piotrowski 2018 ) are compared to verify the optimization effect of MCOA. The parameter settings of each algorithm are shown in Table 2.

To test the performance of MCOA, this paper selects 10 benchmark functions of IEEE CEC2020 for simulation experiments, where F1 is a unimodal function, F2–F3 are multimodal functions, F4 is a function without peaks, F5–F7 are hybrid functions, and F8–F10 are composite functions. The parameters of this experiment are set uniformly as follows: the maximum number of evaluations MaxFEs is 100,000, the population size N is 30, and the dimension d is 10. The MCOA algorithm and the other nine algorithms are each run independently 30 times, and the average fitness value, the standard deviation of the fitness value and the Friedman ranking of each algorithm are computed. The specific settings of the IEEE CEC2020 benchmark functions are shown in Table 3.

4.1.1 Results statistics and convergence curve analysis of IEEE CEC2020 benchmark functions

In order to more clearly and intuitively compare the ability of MCOA and various algorithms to find individual optimal solutions, the average fitness value, standard deviation of fitness value and Friedman ranking obtained by running MCOA and other comparison algorithms independently for 30 times are presented in the form of tables and images. The data and images are shown in Table  4 and Fig.  5 respectively.

figure 5

Convergence curve of MCOA algorithm in IEEE CEC2020

In Table 4, mean denotes the average fitness value, std the standard deviation of the fitness value, rank the Friedman ranking, Friedman average rank the average ranking of the algorithm over all functions, and Friedman rank the final ranking of the algorithm. Compared with the other algorithms, MCOA achieves the best overall results in average fitness value, standard deviation of fitness value and Friedman ranking. On the unimodal function F1, although MCOA is slightly worse than LSHADE, it is superior to the other algorithms in mean fitness value, standard deviation of fitness value and Friedman ranking. On the multimodal functions F2 and F3, although the average fitness value of MCOA is slightly worse, it still ranks second, and on F3 its standard deviation of fitness value shows better stability than the comparison algorithms. On the peakless function F4, all algorithms except GA and LSHADE find the optimal individual solution stably. On the hybrid functions F5, F6 and F7, the mean fitness value of LSHADE is better than that of MCOA, but the standard deviation of the fitness value of MCOA is better than those of the other algorithms. Among the composite functions F8, F9 and F10, the standard deviation of MCOA's fitness value on F8 is slightly worse than that of LSHADE, but MCOA achieves the best average fitness value and standard deviation on the other composite functions and ranks first on all of them. Finally, in terms of Friedman average rank, MCOA shows strong comprehensive performance and still ranks first. The analysis of Table 4 shows that MCOA ranks first overall, has a good optimization effect, and outperforms the other nine comparison algorithms.

Figure 5 shows that, on the IEEE CEC2020 benchmark functions, for the unimodal function F1 the LSHADE algorithm converges better, but compared with similar metaheuristic algorithms, MCOA, despite a slower early convergence rate, escapes the local optimum and converges quickly in the middle stage. On the multimodal functions F2 and F3, similarly, MCOA converges faster in the middle and late stages and effectively exits local optima; although it converges more slowly than LSHADE, it can still find the optimal value. On the peakless function F4, all algorithms except LSHADE, STOA and PDO find the optimal value quickly because the function is easy to optimize. On the hybrid functions F5, F6 and F7, although MCOA converges slightly more slowly than COA in the early stage, in the later stage it still finds better values than the other eight algorithms except LSHADE. For the composite functions F8, F9 and F10, MCOA finds the optimal value faster than the other nine algorithms.

In summary, although LSHADE finds the optimal value more effectively on a few functions, MCOA can still find the optimal value on most functions in the later stage, and compared with the other eight algorithms of the same type, MCOA shows a clearer improvement in optimization ability and in avoiding local optima, and has better practical effect.

4.1.2 Analysis of Wilcoxon rank sum test results

In comparison experiments, the different results of multiple algorithms solving the same problem are used to judge whether each algorithm is efficient and has a clear influence on the current problem, for example through the convergence speed of the convergence curve, the fitness value of the optimal solution, and the ability to jump out of local optima. The average fitness value, the standard deviation of the fitness value and the convergence curve alone are not a sufficient basis for judging whether an algorithm's performance is efficient. Therefore, the data and figures produced by each algorithm are analyzed comprehensively, and the Wilcoxon rank-sum test is used to further verify the difference between MCOA and the other nine comparison algorithms. In this experiment, the significance level is set to 5%: if the computed p-value is less than 5%, there is a significant difference between the two algorithms; if it is greater than 5%, there is no significant difference. Table 5 shows the Wilcoxon rank-sum test results of MCOA against the other nine comparison algorithms, where the symbols “+”, “−” and “=” indicate that the performance of MCOA is better than, worse than, or equal to the comparison algorithm, respectively.
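For illustration, the Wilcoxon rank-sum test can be sketched in a self-contained way with the normal approximation (no tie correction; in practice a library routine such as scipy.stats.ranksums would be used):

```python
import math

def ranksum_pvalue(a, b):
    # Wilcoxon rank-sum test, normal approximation, assuming no ties:
    # W is the sum of the ranks of sample a in the pooled sorted data.
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    W = sum(pooled.index(v) + 1 for v in a)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below 0.05 corresponds to the 5% significance threshold used in the text.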

In the calculation of the peakless function F4, a value of 1 appears in the comparisons between MCOA and algorithms such as COA, ROA and STOA, indicating that on this function several algorithms all find the optimal value and there is no significant difference, which can be ignored. In most of the remaining functions, however, the significance level of MCOA compared with the other nine algorithms is less than 5%, indicating a significant difference.

Overall, the MCOA algorithm also achieves good results in the Wilcoxon rank-sum test on the IEEE CEC2020 benchmark functions: its p-values against the other algorithms are less than 5%, which proves that MCOA differs significantly from the other nine algorithms and has better optimization performance. The comparison with the original algorithm further shows that the improvements in MCOA are effective.

4.2 Comparison experiment of single strategy

MCOA adopts two strategies to improve COA: the environment update mechanism and the ghost opposition-based learning strategy. To show that each strategy contributes to the optimization performance, a single-strategy comparison experiment is added in this section. EUCOA, which adds only the environment update mechanism, and GOBLCOA, which adds only the ghost opposition-based learning strategy, are compared with the basic COA. The experiments are run independently 30 times on the IEEE CEC2020 benchmark functions, and the resulting statistics are shown in Table 6, where the poorer results are shown in bold for clarity. As the table shows, among the best fitness values, average fitness values and standard deviations of fitness values over the 10 test functions, GOBLCOA and EUCOA account for few bolded entries, while most entries of the original COA are bolded, which effectively shows that both the environment update mechanism and the ghost opposition-based learning strategy contribute to COA. The comprehensive performance of COA is thus significantly improved.

4.3 Parameter sensitivity analysis of water flow velocity factor c

To better demonstrate the influence of the water flow velocity factor on MCOA, we run comparison experiments with different values of c. Table 7 shows the statistics of 30 independent runs on CEC2020 for the different values; bold entries are the best results. As the table shows, the results for c = 2 are clearly better than those for the other values, with only slightly worse results on individual test functions: on F1, c = 5 has the best std; on F5, c = 6 has the best std; and on F10, c = 5 has the best std. On the remaining test functions, both the mean fitness value and the std are optimal at c = 2. This analysis shows that the water flow velocity factor c = 2 gives a good optimization effect.

4.4 Experimental summary

In this section, we first test MCOA's optimization performance on the IEEE CEC2020 benchmark functions: the improved MCOA is compared with the original COA and eight other comparison algorithms in the same environment, and the results are analyzed. Second, the rank-sum test is used to verify whether there are significant differences between MCOA and the nine comparison algorithms. Finally, EUCOA (with only the environment update mechanism), GOBLCOA (with only the ghost opposition-based learning strategy), COA and MCOA are tested to isolate the performance contribution of each strategy. These experimental results show that MCOA has a good ability to find optimal solutions and to escape local optima.

5 Constrained engineering design problems

As the era of big data develops, solution processes become more complex and results must be more accurate, so increasing attention is paid to the feasibility and practicality of algorithms on constrained engineering design problems. To verify the optimization effect of MCOA in practical applications, four constrained engineering design problems are selected to evaluate MCOA's performance on real problems. Each constrained engineering design problem has a minimization objective function (Papaioannou and Koulocheris 2018 ) that computes the fitness value for the given problem, together with a varying number of constraints that are taken into account when evaluating the objective; if the constraints are not met, a penalty function (Yeniay 2005 ) adjusts the fitness value. Handling the constraints is not the focus of our research; our focus is on optimizing the parameters within the convex region formed by the constraints (Liu and Lu 2014 ). To ensure fairness, the parameters of all experiments in this section are set as follows: the maximum number of evaluations MaxFEs is 10,000 and the population size N is 30. In each experiment, every algorithm is run 500 times and the best result is reported.
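The penalty-based constraint handling described above can be sketched as follows: each violated inequality constraint g(x) ≤ 0 adds a penalty proportional to its violation. The function names and the penalty coefficient are illustrative assumptions:

```python
def penalized_fitness(objective, constraints, x, penalty=1e6):
    # Constraints are written as g(x) <= 0; each violation adds a penalty
    # proportional to its magnitude. Coefficient 1e6 is an assumed value.
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation
```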

5.1 Multi-disc clutch braking problem

The multi-disc clutch braking problem is a common constrained engineering design problem in the field of vehicle engineering. The aim is to minimize the mass of the multi-disc clutch subject to eight constraints by optimizing five variables, thereby improving the clutch's performance. The five variables are the inner diameter r i , the outer diameter r o , the brake disc thickness t , the driving force F , and the surface friction coefficient Z . The specific structure of the multi-disc clutch is shown in Fig. 6.

figure 6

Schematic diagram of the multi-disc clutch braking problem

The mathematical formulation of the multi-disc clutch braking problem is as follows.

Objective function:

Subject to:

Variable range:

Other parameters:

After calculation and experiments, the results for the multi-disc clutch braking problem are summarized in Table 8. In Table 8, MCOA obtains inner diameter r i = 70, outer diameter r o = 90, brake disc thickness t = 1, driving force F = 600, and surface friction coefficient Z = 2. The minimum weight obtained is 0.2352424, an improvement of 11.16% over the original algorithm. The other five algorithms show an optimization effect on this problem far below that of MCOA.

5.2 Design problem of welding beam

The welded beam design problem is very common in the field of structural engineering. It has four decision variables (weld width h , connecting beam length l , beam height t , and connecting beam thickness b ) and seven further constraints, which makes it challenging to solve. The goal of the optimization algorithm is to achieve the best structural performance and minimize the weight of the welded beam by optimizing the shape, size and layout of the weld under these constraints. The specific structure of the welded beam is shown in Fig. 7.

figure 7

Schematic diagram of the welded beam design problem

The mathematical formulation of the welded beam design problem is as follows.

Boundaries:

The experimental results of the welded beam design problem are shown in Table 9. In the table, the MCOA algorithm obtains weld width h = 0.203034, connecting beam length l = 3.310032, beam height t = 9.084002, and connecting beam thickness b = 0.20578751. The minimum weight is 1.707524, an improvement of 1.46% over the original algorithm. In the welded beam design problem, the weight determines the practical effect of the algorithm; the weight obtained by MCOA is smaller than those of the other algorithms, so its practical application effect is the best.

5.3 Design problem of reducer

A reducer is a mechanical device used to reduce rotational speed and increase torque. Gears and bearings are indispensable parts of a reducer design and have a great impact on its transmission efficiency, running stability, and service life. The weight of the reducer also determines how it can be used. Therefore, we adjust parameters such as the number of teeth, shape, and radius of the gears in the reducer to maximize its performance, reduce the friction between parts, and extend its service life. In this problem, a total of seven variables are constrained: the gear width x 1, the gear module x 2, the number of gear teeth x 3, the length of the first axis between bearings x 4, the length of the second axis between bearings x 5, the diameter of the first axis x 6, and the diameter of the second axis x 7. The specific structure of the reducer is shown in Fig. 8.

figure 8

Schematic diagram of the reducer design problem

The mathematical model of the reducer design problem is as follows.

The experimental results of the reducer design problem are shown in Table 10. From Table 10, the gear width calculated by the MCOA algorithm is x 1 = 3.47635, the gear module is x 2 = 0.7, the number of gear teeth is x 3 = 17, the length of the first axis between the bearings is x 4 = 7.3, the length of the second axis between the bearings is x 5 = 7.8, the diameter of the first axis is x 6 = 3.348620, and the diameter of the second axis is x 7 = 5.2768. The resulting minimum weight is 2988.27135, a 0.08% improvement over the original algorithm. From this experiment it can be concluded that MCOA attains the smallest minimum weight among all compared algorithms on this problem, which shows that MCOA has the best optimization effect in solving such problems.
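For reference, the weight objective of the standard speed reducer benchmark can be sketched as below; this is the commonly used form from the literature (assumed here, constraints omitted), and evaluating it at the reported solution gives a weight of roughly 2.98 × 10³:

```python
# Weight objective of the standard speed reducer benchmark (assumed form);
# x1..x7 are the seven decision variables described in the text.
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# MCOA solution reported in Table 10
w = reducer_weight(3.47635, 0.7, 17, 7.3, 7.8, 3.348620, 5.2768)
```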

5.4 Design problem of three-bar truss

The three-bar truss structure is widely used in bridges, buildings, mechanical equipment, and other fields, although the size, shape, and connection mode of the bars still need further study. In solving this problem, the cross-sectional areas A 1 = x 1 and A 2 = x 2, which are linked by the symmetry of the structure, must be considered. In addition, there are constraints on the total supported load, material cost, and other conditions such as the cross-sectional area. The structural diagram of the three-bar truss is shown in Fig. 9.

figure 9

Schematic diagram of the three-bar truss design problem

The mathematical formulation of the three-bar truss design problem is as follows.

The experimental results of the three-bar truss design problem are shown in Table 11, from which it can be seen that MCOA obtains x 1 = 0.7887564 and x 2 = 0.4079948 on this problem. The resulting minimum weight is 263.85438633, a 0.24% improvement over the original algorithm. Compared with the minimum weights of the other algorithms, the value obtained by MCOA is the smallest, so the MCOA algorithm has a good optimization effect on the three-bar truss design problem.
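The three-bar truss objective is the simplest of the four benchmarks; assuming the standard form with bar length L = 100, a minimal sketch evaluated at the reported solution:

```python
import math

# Weight objective of the standard three-bar truss benchmark (assumed form),
# with bar length L = 100: f = (2*sqrt(2)*x1 + x2) * L
def truss_weight(x1, x2, L=100.0):
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

# MCOA solution reported in Table 11; evaluates to about 263.9
w = truss_weight(0.7887564, 0.4079948)
```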

The experimental results on the four constrained engineering design problems show that MCOA performs well when dealing with problems of this type. In addition, we apply MCOA to a wrapper-based high-dimensional feature selection problem, using the classification and data-processing results to further judge whether MCOA has good optimization performance and the ability to handle diverse problems.

6 High-dimensional feature selection problem

The objective of feature selection is to eliminate redundant and irrelevant features, thereby obtaining a more accurate model. However, in high-dimensional feature spaces, feature selection encounters challenges such as high computational cost and susceptibility to over-fitting. To tackle these issues, this paper proposes a novel high-dimensional feature selection method based on metaheuristic algorithms, aiming to enhance the efficiency and effectiveness of feature selection on complex, high-dimensional datasets.

High-dimensional feature selection, as discussed in reference (Ghaemi and Feizi-Derakhshi 2016 ), focuses on processing high-dimensional data to extract relevant features while eliminating redundant and irrelevant ones. This process enhances the model's generalization ability and reduces computational costs. The problem of high-dimensional feature selection is often referred to as sparse modeling, encompassing two primary methods: filter and wrapper. Filter methods, also called classifier-independent methods, can be categorized into univariate and multivariate methods. Univariate methods consider individual features independently, leveraging the correlation and dependence within the data to quickly screen and identify the optimal feature subset. On the other hand, multivariate methods assess relationships between multiple features simultaneously, aiming to comprehensively select the most informative feature combinations. Wrapper methods offer more diverse solutions. This approach treats feature selection as an optimization problem, utilizing specific performance measures of classifiers and objective functions. Wrapper methods continuously explore and evaluate various feature combinations to find the best set of features that maximizes the model’s performance. Unlike filter methods, wrapper methods provide a more customized and problem-specific approach to feature selection.

Filter methods, being relatively simple and one-sided, approach feature selection in a straightforward manner by considering individual features and their relationships within the dataset. However, they may lack the flexibility needed for complex and specific problem scenarios. In contrast, wrapper methods offer tailored, problem-specific solutions. They exhibit strong adaptability, wide applicability, and high relevance to the specific problem at hand, and can be integrated with any learning algorithm, allowing a more customized and targeted approach to feature selection. By treating feature selection as an optimization problem and continuously evaluating different feature combinations, wrapper methods can maximize the effectiveness of the algorithm and optimize its performance to a greater extent than filter methods. In summary, wrapper methods provide a more sophisticated, problem-specific approach to feature selection, enabling the algorithm to reach its full potential by selecting the most relevant and informative features for the given task.

6.1 Fitness function

In this subsection, the wrapper method in high-dimensional feature selection is elucidated, employing the classification error rate (CEE) (Wang et al. 2005 ) as an illustrative example. CEE is utilized as the fitness function or objective function to assess the optimization effectiveness of the feature selection algorithm for the problem at hand. Specifically, CEE quantifies the classification error rate when employing the k-nearest-neighbors (KNN) algorithm (Datasets | Feature Selection @ ASU. 2019 ), with the Euclidean distance (ED) (The UCI Machine Learning Repository xxxx) serving as the metric for measuring the distance between the current model being tested and its neighboring models. By using CEE as the fitness function, the wrapper method evaluates different feature subsets based on their performance in the context of the KNN algorithm. This approach enables the algorithm to identify the most relevant features that lead to the lowest classification error rate, thereby optimizing the model's performance. By focusing on the accuracy of classification in a specific algorithmic context, the wrapper method ensures that the selected features are highly tailored to the problem and the chosen learning algorithm. This targeted feature selection process enhances the overall performance and effectiveness of the algorithm in handling high-dimensional data.

where X denotes the features (feat) and Y denotes the labels; both X and Y are specific to the given data model, and D is the total number of features recorded.
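As a concrete illustration, this wrapper fitness can be sketched in NumPy: a candidate solution is a binary mask over the D features, and the fitness is the error rate of a KNN classifier (Euclidean distance) restricted to the selected features. The helper names below are ours, not the paper's:

```python
import numpy as np

def knn_error_rate(X_train, y_train, X_val, y_val, k=5):
    # Euclidean distances from each validation point to all training points
    d = np.linalg.norm(X_val[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]               # k nearest neighbours
    votes = y_train[idx]                             # neighbour labels
    pred = np.array([np.bincount(v).argmax() for v in votes])
    return float(np.mean(pred != y_val))             # classification error rate

def cee_fitness(feature_mask, X_train, y_train, X_val, y_val, k=5):
    # Fitness of a binary feature mask = KNN error rate on the selected
    # columns (lower is better); an empty mask gets the worst score.
    cols = np.flatnonzero(feature_mask)
    if cols.size == 0:
        return 1.0
    return knn_error_rate(X_train[:, cols], y_train,
                          X_val[:, cols], y_val, k)
```

With two well-separated classes on an informative feature, a mask selecting only that feature drives the error rate to zero, which is what the optimizer searches for.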

In the experimental setup, each dataset is partitioned into a training set and a test set at an 80%/20% ratio. The training set is first used to select the most characteristic features and fine-tune the parameters of the KNN model; the test set is then used to evaluate the resulting data model and algorithm performance. To address concerns about fitting ability and overfitting, stratified cross-validation with K = 10 was employed in this experiment. The training portion is divided into ten equal-sized subsets; the KNN classifier is trained on 9 of the 10 folds (K−1 folds) to identify the optimal KNN classifier, while the remaining fold is used for validation. This process is repeated 10 times, ensuring that each subset serves both as a validation set and as part of the training data. This iterative approach is a crucial component of our evaluation methodology, providing a robust assessment of the algorithm's performance: by rotating the validation fold across repeated training runs, we enhance the reliability and accuracy of the evaluation, enabling a comprehensive analysis of the algorithm's effectiveness across various datasets.
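The K = 10 scheme described above (ten equal folds that preserve class proportions, nine for training and one for validation, rotated ten times) can be sketched with NumPy alone; the function names are illustrative, not the paper's:

```python
import numpy as np

def stratified_kfold_indices(y, k=10, seed=0):
    # Shuffle within each class, then deal samples round-robin into k folds
    # so every fold keeps roughly the overall class proportions.
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == cls))
        for i, sample in enumerate(idx):
            folds[i % k].append(sample)
    return [np.array(f) for f in folds]

def cv_splits(y, k=10, seed=0):
    # K-1 folds train the classifier, the held-out fold validates;
    # rotating the held-out fold yields K train/validation splits.
    folds = stratified_kfold_indices(y, k, seed)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```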

6.2 High-dimensional datasets

In this subsection, the optimization performance of MCOA is assessed using 12 high-dimensional datasets sourced from the Arizona State University (Too et al. 2021 ) and University of California Irvine (UCI) Machine Learning databases (Chandrashekar and Sahin 2014 ). By conducting experiments on these high-dimensional datasets, the results obtained are not only convincing but also pose significant challenges. These datasets authentically capture the intricacies of real-life spatial problems, making the experiments more meaningful and applicable to complex and varied spatial scenarios. For a detailed overview of the 12 high-dimensional datasets, please refer to Table  12 .

6.3 Experimental results and analysis

In order to assess the effectiveness and efficiency of MCOA in feature selection, we conducted comparative tests using MCOA as well as several other algorithms including COA, SSA, PSO, ABC, WSA (Baykasoğlu et al. 2020 ), FPA (Yang 2012 ), and ABO (Qi et al. 2017 ) on 12 datasets. In this section of the experiment, the fitness value of each algorithm was calculated, and the convergence curve, feature selection accuracy (FS Accuracy), and selected feature size for each algorithm were analyzed. Figures 10 , 11 and 12 display the feature selection (FS) convergence curve, FS Accuracy, and selected feature size for the eight algorithms across the 12 datasets. From these figures, it is evident that the optimization ability and prediction accuracy of the MCOA algorithm surpass those of the other seven comparison algorithms. Taking the dataset CLL-SUB-111 as an example in Figs.  11 and 12 , MCOA selected 20 features, while the other seven algorithms selected more than 2000 features. Moreover, the prediction accuracy achieved by MCOA was higher than that of the other seven algorithms. Across all 12 datasets, the comparison figures indicate that the MCOA algorithm consistently outperforms the others. Specifically, the MCOA algorithm tends to select smaller feature subsets, leading to higher prediction accuracy and stronger optimization capabilities. This pattern highlights the superior performance of MCOA in feature selection, demonstrating its effectiveness in optimizing feature subsets for improved prediction accuracy.

figure 10

Convergence curve of FS

figure 11

Comparison plot of verification accuracy of eight algorithms

figure 12

Comparison plots of feature sizes of the eight algorithms

To address the randomness and instability inherent in experiments, a single experiment may not fully demonstrate the effectiveness of algorithm performance. Therefore, we conducted 30 independent experiments using 12 datasets and 8 algorithms. For each algorithm and dataset combination, we calculated the average fitness value, standard deviation of the fitness value, and Friedman rank. Subsequently, the Wilcoxon rank sum test was employed to determine significant differences between the performance of different algorithms across various datasets. Throughout the experiment, a fixed population size of 10 and a maximum of 100 iterations were used. The 12 datasets were utilized to evaluate the 8 algorithms 300 times (tenfold cross-validation × 30 runs). It is essential to note that all algorithms were assessed using the same fitness function derived from the dataset, ensuring a consistent evaluation criterion across the experiments. By conducting multiple independent experiments and statistical analyses, the study aimed to provide a comprehensive and robust assessment of algorithm performance. This approach helps in drawing reliable conclusions regarding the comparative effectiveness of the algorithms under consideration across different datasets, accounting for the inherent variability and randomness in the experimental process.
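The Friedman ranking used in these tables can be sketched as follows: on each dataset the algorithms are ranked by average fitness (1 = best, lower fitness is better), and the ranks are then averaged over all datasets. A minimal version that assumes no ties:

```python
import numpy as np

def friedman_avg_ranks(mean_fitness):
    # mean_fitness[d, a] = average fitness of algorithm a on dataset d.
    # Double argsort turns each row into within-row ranks (0-based),
    # assuming no ties; +1 makes the best algorithm rank 1.
    ranks = np.argsort(np.argsort(mean_fitness, axis=1), axis=1) + 1
    return ranks.mean(axis=0)
```

For example, with two datasets and three algorithms, the algorithm that is best on one dataset and second on the other gets an average rank of 1.5.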

Table 13 presents the average fitness results of the eight algorithms over 30 independent experiments; the average fitness achieved by MCOA is 55.23% better than that of the original algorithm. According to the table, on the Ionosphere dataset MCOA exhibits the best average fitness, albeit with slightly lower stability than ABC. Similarly, on the WarpAR10P dataset MCOA achieves the best average fitness, with stability slightly lower than COA. Friedman ranking of the fitness results of the 30 independent experiments shows that, although MCOA has slightly lower stability on some datasets, it ranks first overall. Among the other seven algorithms, PSO ranks second, ABO third, and COA fourth, while ABC, SSA, FPA, and WSA rank fifth to eighth, respectively. These results demonstrate that MCOA exhibits robust optimization performance and high stability in solving high-dimensional feature selection problems. Moreover, MCOA outperforms COA, showcasing the improvement it brings to solving these complex problems.

Table 14 presents the accuracy results of the eight algorithms over 30 independent experiments; the average accuracy achieved by MCOA is 10.85% higher than that of the original algorithm. According to the table, the average accuracy of MCOA is the highest across all datasets. Notably, on the Colon dataset MCOA performs exceptionally well, with a perfect average accuracy of 100%. However, on the Ionosphere dataset MCOA exhibits slightly lower stability than ABC, and on the WarpAR10P dataset it is slightly less stable than COA. Friedman ranking of the average accuracy results of the 30 independent experiments shows that MCOA ranks first overall. Among the other seven algorithms, PSO ranks second, ABC third, and COA fourth, while ABO, FPA, SSA, and WSA rank fifth to eighth, respectively. These results highlight that MCOA consistently achieves high accuracy and stability in solving high-dimensional feature selection problems; its superior performance across various datasets underscores its effectiveness and reliability in real-world applications.

Table 15 shows that the MCOA algorithm achieves significant results in the Wilcoxon rank sum test on high-dimensional feature selection fitness. The p-values from the comparisons with the other algorithms are all below 0.05, indicating that MCOA differs significantly from the other seven algorithms. This result serves as evidence that MCOA outperforms the other algorithms, showcasing its superior optimization performance. Additionally, the comparison with the original algorithm makes it evident that the modifications in MCOA have a substantial positive impact, demonstrating its effectiveness and improvement over existing methods. These findings underscore the algorithm's potential to provide substantial enhancements in the field of high-dimensional feature selection.

7 Conclusions and future work

The Crayfish Optimization Algorithm (COA) is grounded in swarm intelligence, drawing inspiration from crayfish behavior to find optimal solutions within a specific range. However, COA’s limitations stem from neglecting crucial survival traits of crayfish, such as crawling against water to discover better aquatic environments. This oversight weakens COA’s search ability, making it susceptible to local optima and hindering its capacity to find optimal solutions. To address these issues, this paper introduces a Modified Crayfish Optimization Algorithm (MCOA). MCOA incorporates an environmental updating mechanism, enabling crayfish to randomly select directions toward better aquatic environments for location updates, enhancing search ability. The addition of the ghost opposition-based learning strategy expands MCOA’s search range and promotes escape from local optima. Experimental validations using IEEE CEC2020 benchmark functions confirm MCOA’s outstanding optimization performance.
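The ghost opposition-based learning strategy builds on classical opposition-based learning; its exact form is given in the paper, but the underlying idea can be sketched generically (an assumption, not the paper's exact operator): the opposite of a candidate x within bounds [lb, ub] is lb + ub − x, and the better of the pair is kept, widening the explored region and helping escape local optima.

```python
import numpy as np

def obl_step(pop, fit, lb, ub, objective):
    # Classical opposition-based learning: evaluate the mirrored point
    # lb + ub - x for each candidate and keep whichever is better
    # (minimization). pop and fit are updated in place.
    opp = lb + ub - pop
    opp_fit = np.array([objective(ind) for ind in opp])
    better = opp_fit < fit
    pop[better] = opp[better]
    fit[better] = opp_fit[better]
    return pop, fit
```

On a sphere objective with bounds [0, 5], the candidate (4, 4) is replaced by its opposite (1, 1), whose fitness is lower.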

Moreover, MCOA’s practical applicability is demonstrated through applications to four constrained engineering problems and high-dimensional feature selection challenges. These experiments underscore MCOA’s efficacy in real-world scenarios, but MCOA can currently solve only single-objective optimization problems. In future studies, efforts will be made to further optimize MCOA and enhance its functionality. We will develop a multi-objective version of the algorithm, increasing its search ability and convergence through non-dominated sorting, multi-objective selection, crossover, and mutation, in order to solve more complex practical problems, and extend it to wireless sensor network coverage, machine learning, image segmentation, and other practical applications.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH (2021) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609. https://doi.org/10.1016/j.cma.2020.113609


Abualigah L, Elaziz MA, Khasawneh AM, Alshinwan M, Ibrahim RA, Al-Qaness MA, Gandomi AH (2022) Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: a comprehensive survey, applications, comparative analysis, and results. Neural Comput Appl. https://doi.org/10.1007/s00521-021-06747-4


Ahmed AM, Rashid TA, Saeed SAM (2020) Cat swarm optimization algorithm: a survey and performance evaluation. Comput Intell Neurosci. https://doi.org/10.1155/2020/4854895

Baykasoğlu A, Ozsoydan FB, Senol ME (2020) Weighted superposition attraction algorithm for binary optimization problems. Oper Res Int Journal 20:2555–2581. https://doi.org/10.1007/s12351-018-0427-9

Baykasoglu A, Ozsoydan FB (2015) Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl Soft Comput 36:152–164. https://doi.org/10.1016/j.asoc.2015.06.056

Belge E, Altan A, Hacıoğlu R (2022) Metaheuristic optimization-based path planning and tracking of quadcopter for payload hold-release mission. Electronics 11(8):1208. https://doi.org/10.3390/electronics11081208

Beyer HG, Schwefel HP (2002) Evolution strategies–a comprehensive introduction. Nat Comput 1:3–52. https://doi.org/10.1023/A:1015059928466

Chandrashekar G, Sahin F (2014) A survey on feature selection methods. Comput Electr Eng 40(1):16–28. https://doi.org/10.1016/j.compeleceng.2013.11.024

Cherrington, M., Thabtah, F., Lu, J., & Xu, Q. (2019, April). Feature selection: filter methods performance challenges. In 2019 International Conference on Computer and Information Sciences (ICCIS) (pp. 1–4). IEEE. https://doi.org/10.1109/ICCISci.2019.8716478

Datasets | Feature Selection @ ASU. Accessed from 3 Oct 2019 https://jundongl.github.io/scikit-feature/OLD/home_old.html .

Deng L, Liu S (2023) Snow ablation optimizer: a novel metaheuristic technique for numerical optimization and engineering design. Expert Syst Appl 225:120069. https://doi.org/10.1016/j.eswa.2023.120069

Dhiman G, Kaur A (2019) STOA: a bio-inspired based optimization algorithm for industrial engineering problems. Eng Appl Artif Intell 82:148–174. https://doi.org/10.1016/j.engappai.2019.03.021

Dorigo M, Birattari M, Stutzle T (2006) Ant colony optimization. IEEE Comput Intell Mag 1(4):28–39. https://doi.org/10.1109/MCI.2006.329691

Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm–a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110:151–166. https://doi.org/10.1016/j.compstruc.2012.07.010

Espejo, P. G., Ventura, S., & Herrera, F. (2009) A survey on the application of genetic programming to classification. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 40(2): 121–144. https://doi.org/10.1109/TSMCC.2009.2033566

Ezugwu AE, Agushaka JO, Abualigah L, Mirjalili S, Gandomi AH (2022) Prairie dog optimization algorithm. Neural Comput Appl 34(22):20017–20065. https://doi.org/10.1007/s00521-022-07530-9

Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 152:113377. https://doi.org/10.1016/j.eswa.2020.113377

Formato RA (2007) Central force optimization. Prog Electromagn Res 77(1):425–491. https://doi.org/10.2528/PIER07082403

Ghaemi M, Feizi-Derakhshi MR (2016) Feature selection using forest optimization algorithm. Pattern Recogn 60:121–129. https://doi.org/10.1016/j.patcog.2016.05.012

Guedria NB (2016) Improved accelerated PSO algorithm for mechanical engineering optimization problems. Appl Soft Comput 40:455–467. https://doi.org/10.1016/j.asoc.2015.10.048

He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intel 20:89–99. https://doi.org/10.1016/j.engappai.2006.03.003

Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Futur Gener Comput Syst 97:849–872. https://doi.org/10.1016/j.future.2019.02.028

Jacob DIJ, Darney DPE (2021) Artificial bee colony optimization algorithm for enhancing routing in wireless networks. J Artif Intell Capsule Networks 3(1):62–71. https://doi.org/10.36548/jaicn.2021.1.006

Jia H, Peng X, Lang C (2021) Remora optimization algorithm. Expert Syst Appl 185:115665. https://doi.org/10.1016/j.eswa.2021.115665

Jia H, Wen Q, Wu D, Wang Z, Wang Y, Wen C, Abualigah L (2023a) Modified beluga whale optimization with multi-strategies for solving engineering problems. J Comput Design Eng 10(6):2065–2093. https://doi.org/10.1093/jcde/qwad089

Jia H, Rao H, Wen C, Mirjalili S (2023b) Crayfish optimization algorithm. Artif Intell Rev. https://doi.org/10.1007/s10462-023-10567-4

Jia H, Lu C, Wu D, Wen C, Rao H, Abualigah L (2023c) An improved reptile search algorithm with ghost opposition-based learning for global optimization problems. J Comput Design Eng. https://doi.org/10.1093/jcde/qwad048

Jović, A., Brkić, K., & Bogunović, N. (2015). A review of feature selection methods with applications. In 2015 38th international convention on information and communication technology, electronics and microelectronics (MIPRO) (pp. 1200–1205). IEEE. https://doi.org/10.1109/MIPRO.2015.7160458

Kamal M, Mortazavi A, Cakici Z (2023) Optimal design of RC bracket and footing systems of precast industrial buildings using fuzzy differential evolution incorporated virtual mutant. Arabian J Sci Eng. https://doi.org/10.3934/mbe.2022263

Kandemir EC, Mortazavi A (2022) Optimization of seismic base isolation system using a fuzzy reinforced swarm intelligence. Adv Eng Softw 174:103323. https://doi.org/10.1016/j.advengsoft.2022.103323

Kaveh A (2017) Applications of metaheuristic optimization algorithms in civil engineering. Springer International Publishing, Basel, Switzerland. https://doi.org/10.1007/978-3-319-48012-1


Kaveh A, Khayatazad M (2012) A new meta-heuristic method: ray optimization. Comput Struct 112:283–294. https://doi.org/10.1016/j.compstruc.2012.09.003

Kaveh A, Mahdavi V (2014) Colliding bodies optimization: a novel meta-heuristic method. Comput Struct 139:18–27. https://doi.org/10.1016/j.compstruc.2014.04.005

Kira, K., & Rendell, L. A. (1992). The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the tenth national conference on Artificial intelligence (pp. 129–134). https://doi.org/10.5555/1867135.1867155

Kiran MS (2015) TSA: tree-seed algorithm for continuous optimization. Expert Syst Appl 42(19):6686–6698. https://doi.org/10.1016/j.eswa.2015.04.055

Kumar S, Tejani GG, Mirjalili S (2019) Modified symbiotic organisms search for structural optimization. Eng with Comput 35(4):1269–1296. https://doi.org/10.1007/s00366-018-0662-y

Kuo HC, Lin CH (2013) Cultural evolution algorithm for global optimizations and its applications. J Appl Res Technol 11(4):510–522

Liu X, Lu P (2014) Solving nonconvex optimal control problems by convex optimization. J Guid Control Dyn 37(3):750–765. https://doi.org/10.2514/1.62110

Lu P, Ye L, Zhao Y, Dai B, Pei M, Tang Y (2021) Review of meta-heuristic algorithms for wind power prediction: methodologies, applications and challenges. Appl Energy 301:117446. https://doi.org/10.1016/j.apenergy.2021.117446

Ma Y, Zhang X, Song J, Chen L (2021) A modified teaching–learning-based optimization algorithm for solving optimization problem. Knowl-Based Syst 212:106599. https://doi.org/10.1016/j.knosys.2023.110554

Mahdavi M, Fesanghary M, Damangir E (2007) An improved harmony search algorithm for solving optimization problems. Appl Math Comput 188(2):1567–1579. https://doi.org/10.1016/j.amc.2006.11.033

Mahdavi S, Rahnamayan S, Deb K (2018) Opposition based learning: a literature review. Swarm Evol Comput 39:1–23. https://doi.org/10.1016/j.swevo.2017.09.010

Meloni C, Pacciarelli D, Pranzo M (2004) A rollout metaheuristic for job shop scheduling problems. Ann Oper Res 131:215–235. https://doi.org/10.1023/B:ANOR.0000039520.24932.4b

Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249. https://doi.org/10.1016/j.knosys.2015.07.006

Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133. https://doi.org/10.1016/j.knosys.2015.12.022

Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27:495–513. https://doi.org/10.1007/s00521-015-1870-7

Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002

Mirjalili, S., & Mirjalili, S. (2019). Genetic algorithm. Evolutionary Algorithms and Neural Networks: Theory and Applications, 43–55. https://doi.org/10.1007/978-3-319-93025-1_4

Moghdani R, Salimifard K (2018) Volleyball premier league algorithm. Appl Soft Comput 64:161–185. https://doi.org/10.1016/j.asoc.2017.11.043

Moloodpoor M, Mortazavi A (2022) Simultaneous optimization of fuel type and exterior walls insulation attributes for residential buildings using a swarm intelligence. Int J Environ Sci Technol 19(4):2809–2822. https://doi.org/10.1007/s13762-021-03323-0

Moloodpoor M, Mortazavi A, Özbalta N (2021) Thermo-economic optimization of double-pipe heat exchanger using a compound swarm intelligence. Heat Transfer Res. https://doi.org/10.1615/HeatTransRes.2021037293

Mortazavi A (2019) Comparative assessment of five metaheuristic methods on distinct problems. Dicle Üniversitesi Mühendislik Fakültesi Mühendislik Dergisi 10(3):879–898. https://doi.org/10.24012/dumf.585790

Papaioannou G, Koulocheris D (2018) An approach for minimizing the number of objective functions in the optimization of vehicle suspension systems. J Sound Vib 435:149–169. https://doi.org/10.1016/j.jsv.2018.08.009

Piotrowski AP (2018) L-SHADE optimization algorithms with population-wide inertia. Inf Sci 468:117–141. https://doi.org/10.1016/j.ins.2018.08.030

Qi X, Zhu Y, Zhang H (2017) A new meta-heuristic butterfly-inspired algorithm. Journal of Computational Science 23:226–239. https://doi.org/10.1016/j.jocs.2017.06.003

Rao H, Jia H, Wu D, Wen C, Li S, Liu Q, Abualigah L (2022) A modified group teaching optimization algorithm for solving constrained engineering optimization problems. Mathematics 10(20):3765. https://doi.org/10.3390/math10203765

Rao, R. V., & Rao, R. V. (2016). Teaching-learning-based optimization algorithm (pp. 9–39). Springer International Publishing. https://doi.org/10.1016/j.cad.2010.12.015

Rashedi E, Nezamabadi-Pour HS (2009) GSA: a gravitational search algorithm. Inform Sci 179:2232–2248. https://doi.org/10.1016/j.ins.2009.03.004

Razmjooy, N., Ashourian, M., & Foroozandeh, Z. (Eds). (2021). Metaheuristics and optimization in computer and electrical engineering. https://doi.org/10.1007/978-3-030-56689-0

Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13:2592–2612. https://doi.org/10.1016/j.asoc.2012.11.026

Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47. https://doi.org/10.1016/j.advengsoft.2017.01.004


Acknowledgements

The authors would like to thank the support of Fujian Key Lab of Agriculture IOT Application, IOT Application Engineering Research Center of Fujian Province Colleges and Universities, Guiding Science and Technology Projects in Sanming City (2023-G-5), Industry-University Cooperation Project of Fujian Province (2021H6039), Fujian Province Industrial Guidance (Key) Project (2022H0053), Sanming Major Science and Technology Project of Industry-University-Research Collaborative Innovation (2022-G-4), and also the anonymous reviewers and the editor for their careful reviews and constructive suggestions to help us improve the quality of this paper.

Author information

Authors and Affiliations

School of Information Engineering, Sanming University, Sanming, 365004, China

Heming Jia, Xuelian Zhou & Jinrui Zhang

Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman, 19328, Jordan

Laith Abualigah

MEU Research Unit, Middle East University, Amman, 11831, Jordan

Department of Mechanical Engineering, Bursa Uludağ University, 16059, Görükle, Bursa, Turkey

Ali Riza Yildiz

Department of Computer and Information Science, Linköping University, 58183, Linköping, Sweden

Abdelazim G. Hussien


Contributions

Heming Jia: Methodology, Formal analysis, Investigation, Resources, Funding acquisition, Project administration; Xuelian Zhou: Investigation, Conceptualization, Software, Data Curation, Writing—Original Draft; Jinrui Zhang: Validation, Conceptualization; Laith Abualigah: Supervision, Writing—Review & Editing; Ali Riza Yildiz: Visualization, Writing—Review & Editing; Abdelazim G. Hussien: Writing—Review & Editing.

Corresponding author

Correspondence to Heming Jia .

Ethics declarations

Competing interests

We declare that we have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Jia, H., Zhou, X., Zhang, J. et al. Modified crayfish optimization algorithm for solving multiple engineering application problems. Artif Intell Rev 57, 127 (2024). https://doi.org/10.1007/s10462-024-10738-x


Accepted: 24 February 2024

Published: 24 April 2024

DOI: https://doi.org/10.1007/s10462-024-10738-x


Keywords

  • Crayfish Optimization Algorithm
  • Environmental updating mechanism
  • Ghost opposition-based learning strategy
  • Global optimization problem
  • Constrained engineering design problems
  • High dimensional feature selection

COMMENTS

  1. Binary Search (With Code)

    Binary Search is a fast and efficient way to find an element in a sorted array. In this tutorial, you will learn how binary search works, how to implement it in C, C++, Java, and Python, and how to analyze its time complexity. Whether you are preparing for a coding interview or want to improve your algorithm skills, this tutorial will help you master binary search.

  2. How to Identify & Solve Binary Search Problems?

    The problem is monotonic in nature and can therefore be solved with binary search on the answer: set low = 0, since the minimum subarray size can be zero, and high = the length of the array, since a subarray can be as large as the whole array. Compute mid = (low + high) / 2 and check whether f(mid) returns true or false.
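
The "binary search on the answer" recipe above can be sketched in Python; the predicate `f` here is a stand-in for whatever problem-specific check the snippet's f(mid) denotes:

```python
def smallest_true(f, low, high):
    """Smallest x in [low, high] with f(x) true, assuming f is monotonic
    (all false up to some point, then all true). Returns high + 1 if f is
    never true on the range."""
    while low <= high:
        mid = (low + high) // 2
        if f(mid):
            high = mid - 1   # mid satisfies f: try something smaller
        else:
            low = mid + 1    # mid fails: the answer must be larger
    return low

# Example predicate: "is x*x at least 50?" -- monotonic in x
print(smallest_true(lambda x: x * x >= 50, 0, 100))  # → 8
```

The monotonicity assumption is what licenses discarding half the range after each check.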

  3. Binary Search

    There are two approaches: the iterative binary search algorithm and the recursive binary search algorithm; pseudocode for both is given below. 1. Iterative binary search algorithm: here we use a while loop to keep comparing the key with the middle element and splitting the search space into two halves.
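
A minimal iterative version (the snippet's implementation targets C++; this sketch uses Python, assumes a sorted list, and returns -1 when the key is absent):

```python
def binary_search(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2      # middle of the current search space
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1            # key can only be in the right half
        else:
            high = mid - 1           # key can only be in the left half
    return -1                        # search space exhausted: key absent

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # → 5
```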

  4. Binary Search: Practice Problems

    Binary Search is a divide-and-conquer algorithm. Like all divide-and-conquer algorithms, binary search first divides a large array into two smaller subarrays and then recursively (or iteratively) processes only the half that can still contain the target.
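
That divide-and-conquer reading maps directly onto a recursive sketch (Python, assuming a sorted list):

```python
def binary_search_rec(arr, key, low, high):
    if low > high:
        return -1                    # empty subarray: key absent
    mid = (low + high) // 2
    if arr[mid] == key:
        return mid
    if arr[mid] < key:
        return binary_search_rec(arr, key, mid + 1, high)  # right half
    return binary_search_rec(arr, key, low, mid - 1)       # left half

print(binary_search_rec([1, 3, 7, 9, 11], 9, 0, 4))  # → 3
```

Each call discards one subarray, so the recursion depth is logarithmic in the list length.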

  5. Binary search (article)

    Binary search is an efficient algorithm for finding an item from a sorted list of items. It works by repeatedly dividing in half the portion of the list that could contain the item, until you've narrowed down the possible locations to just one. We used binary search in the guessing game in the introductory tutorial.

  6. 6.4. The Binary Search

    6.4. The Binary Search. It is possible to take greater advantage of the ordered list if we are clever with our comparisons. In the sequential search, when we compare against the first item, there are at most n − 1 more items to look through if the first item is not what we are looking for. Instead of searching the list in sequence, a binary search starts by examining the middle item.

  7. How to think in an advanced Binary Search problem

    A lot of the time it won't be as simple as just finding a number in a sorted array, but if you can break the problem down into these three conditions, it becomes essentially that simple: let left = 0 ...

  8. Binary Search Algorithm

    The idea is to use binary search, which is a divide-and-conquer algorithm. Like all divide-and-conquer algorithms, binary search first divides a large array into two smaller subarrays and then operates on them recursively (or iteratively). But instead of working on both subarrays, it discards one subarray and continues on the other.

  9. Everything You Need to Know About the Binary Search Algorithm

    Binary Search Algorithm Iteration 1 (image by the author, inspired by Mike Buss [7]). We define the search space by its start and end indices, called low and high. We set the search space by assigning low to the index of the first element in the array (0) and high to the index of the last element in the array (8). We get the index of the middle element of the array, mid, by using the midpoint formula mid = (low + high) / 2.
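
The low/high/mid bookkeeping described above, for a 9-element array (indices 0 through 8), can be checked in a few lines; `low + (high - low) // 2` is the overflow-safe form preferred in fixed-width languages, though Python integers never overflow:

```python
arr = [10, 20, 30, 40, 50, 60, 70, 80, 90]  # 9 elements, indices 0..8
low, high = 0, len(arr) - 1                 # search space = whole array
mid = low + (high - low) // 2               # same value as (low + high) // 2
print(low, high, mid, arr[mid])             # → 0 8 4 50
```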

  10. Binary Search Practice Problems Algorithms

    Solve practice problems for Binary Search to test your programming skills, and go through detailed tutorials to improve your understanding of the topic.

  11. Binary Search Tutorials & Notes

    Tutorial. Binary search is the most popular search algorithm. It is efficient and one of the most commonly used techniques for solving problems. If all the names in the world were written down together in order and you wanted to find the position of a specific name, binary search would accomplish this in a maximum of 35 iterations.
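
The "35 iterations" figure is just the logarithmic bound at work: each comparison halves the remaining candidates, and 2^35 exceeds 34 billion, comfortably more than the roughly eight billion names such a list would hold (the population figure is an assumption for illustration):

```python
import math

names = 8_000_000_000                     # rough world population (assumption)
worst_case = math.ceil(math.log2(names))  # comparisons needed in the worst case
print(worst_case)                         # → 33, under the quoted 35
print(2 ** 35)                            # → 34359738368
```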

  12. Introduction to Divide and Conquer With Binary Search

    This lesson introduces the divide and conquer technique of solving a problem using binary search in a sorted list. Divide and conquer is an algorithmic paradigm in which the problem is repeatedly divided into subproblems until we reach a point where each problem is similar and atomic, i.e., can't be further divided.

  13. Binary search algorithm

    In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated, and the search continues on the remaining half until the target is found or the remaining half is empty.

  14. Search Algorithms

    Linear or Sequential Search. This algorithm works by sequentially iterating through the whole array or list from one end until the target element is found. If the element is found, it returns its index; otherwise it returns -1. Now let's look at an example and try to understand how it works: arr = [2, 12, 15, 11, 7, 19, 45]. Suppose we want to find a target element in this array.
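
The scan described above takes only a few lines in Python; using the snippet's example array:

```python
def linear_search(arr, target):
    for i, value in enumerate(arr):   # walk the list from one end
        if value == target:
            return i                  # found: report its index
    return -1                         # exhausted the list: not present

arr = [2, 12, 15, 11, 7, 19, 45]
print(linear_search(arr, 11))  # → 3
print(linear_search(arr, 99))  # → -1
```

Unlike binary search, this works on unsorted data but inspects up to every element, so it is O(N).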

  15. Solving Problems with Binary Search

    Problem 1 (link to problem | solution code). Explanation: for the problem at hand, let us define a suitable function F(x). It is then easy to see that if F(x) = 0, then F(y) = 0 for all y > x. Thus, the problem satisfies the monotonicity condition necessary for binary search.

  16. Divide and Conquer Algorithms: Binary Search and Binary Search Trees

    The time the binary search algorithm takes to complete is indicated by its time complexity. The worst-case time complexity is O(log N). This means that as the number of values in a dataset increases, the performance time of the algorithm (the number of comparisons) grows as the base-2 logarithm of the number of values.
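
The O(log N) bound is easy to observe empirically; this sketch counts comparisons while searching a million-element list and compares the count against log2(N):

```python
import math

def binary_search_counting(arr, key):
    """Iterative binary search that also reports how many comparisons it made."""
    low, high, comparisons = 0, len(arr) - 1, 0
    while low <= high:
        comparisons += 1
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid, comparisons
        elif arr[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

n = 1_000_000
index, steps = binary_search_counting(list(range(n)), n - 1)
print(index, steps, math.ceil(math.log2(n)))  # → 999999 20 20
```

Twenty comparisons for a million elements is the logarithm doing the work: doubling n adds only one more comparison.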

  17. Binary Search Programming Practice Problem Course Online

    Binary search is an efficient search algorithm for sorted data. Learning it is beneficial because it reduces the time complexity to logarithmic, making it incredibly fast for large-scale data.

  18. Solving problems with Binary Search algorithm

    The binary search algorithm is widely used. It can be very simple, but some implementation details can be confusing. If you check the LeetCode website, you will see that binary search is used to solve a wide range of problems.

  19. 6.4. The Binary Search

    The Binary Search — Problem Solving with Algorithms and Data Structures using C++. 6.4. The Binary Search. It is possible to take greater advantage of the ordered vector if we are clever with our comparisons. In the sequential search, when we compare against the first item, there are at most n − 1 more items to look through if the first item is not what we are looking for.

  20. Searching Algorithms

    Searching algorithms are essential tools in computer science used to locate specific items within a collection of data. These algorithms are designed to navigate data structures efficiently to find the desired information, making them fundamental in applications such as databases and web search engines.

  21. Optimal Binary Search Tree

    Optimal Binary Search Tree extends the concept of the binary search tree. A Binary Search Tree (BST) is a nonlinear data structure used in many scientific applications to reduce search time. In a BST, the left child is smaller than the root and the right child is greater than the root. This arrangement simplifies the search procedure.

  22. Python Program for Binary Search (Recursive and Iterative)

    Python Program for Binary Search using the built-in bisect module. Step-by-step approach: the code imports the bisect module, which provides support for binary searching. A binary_search_bisect() function is defined, which takes an array arr and the element to search for, x, as inputs. The function calls the bisect_left() function of the bisect module.
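
A self-contained sketch of that bisect-based approach (the function name follows the snippet's binary_search_bisect()):

```python
import bisect

def binary_search_bisect(arr, x):
    """Locate x in sorted arr via the standard-library bisect module; -1 if absent."""
    i = bisect.bisect_left(arr, x)   # leftmost position where x could be inserted
    if i < len(arr) and arr[i] == x:
        return i                     # x really is at that position
    return -1

arr = [2, 3, 4, 10, 40]
print(binary_search_bisect(arr, 10))  # → 3
print(binary_search_bisect(arr, 7))   # → -1
```

bisect_left returns an insertion point rather than a match, so the extra equality check is what turns it into a membership search.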

  23. Evolution inspired binary flower pollination for the ...

    The present paper introduces a modified flower pollination algorithm (FPA) enhanced by evolutionary operators to solve the uncapacitated facility location problem (UFLP), which is one of the well-known location science problems. The aim in UFLP is to select some locations to open facilities among a certain number of candidate locations so as to minimize the total cost, which is the sum of ...

  24. Modified crayfish optimization algorithm for solving multiple

    The Crayfish Optimization Algorithm (COA) is innovative and easy to implement, but the crayfish search efficiency decreases in the later stages of the algorithm, and the algorithm easily falls into local optima. To solve these problems, this paper proposes a modified crayfish optimization algorithm (MCOA). Based on the survival habits of crayfish, MCOA proposes an environmental renewal ...