NP-Complete vs NP-Hard




Not much is known about NP, unfortunately. One can think of NP as the ability to appreciate (i.e., verify) a solution, and P as the ability to produce one; clearly P ⊆ NP. We do not even know whether NP is closed under complement, i.e., does L ∈ NP imply that the complement of L is also in NP? NP-completeness is a useful concept for showing the difficulty of a computational problem: proving a problem NP-complete shows that the existence of a polynomial-time algorithm for it would imply P = NP. This section reviews the key steps in constructing a proof of NP-completeness for a problem.
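To make the verify-versus-produce distinction concrete, here is a minimal Python sketch using Subset Sum, a classic NP-complete problem; the function names and the example instance are illustrative assumptions, not part of the original text.

```python
from itertools import combinations

def verify_subset_sum(nums, target, indices):
    """Polynomial-time verification: check a proposed certificate
    (a set of distinct indices into nums) without searching for it."""
    return len(set(indices)) == len(indices) and \
        sum(nums[i] for i in indices) == target

def solve_subset_sum(nums, target):
    """Brute-force production of a solution: tries every subset,
    which is exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for subset in combinations(range(len(nums)), r):
            if sum(nums[i] for i in subset) == target:
                return list(subset)
    return None

# Checking a given certificate is cheap; finding one may need exhaustive search.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))   # True: 4 + 5 == 9
print(solve_subset_sum([3, 34, 4, 12, 5, 2], 21))           # [2, 3, 4]: 4 + 12 + 5
```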

  1. CoNP-complete. Definition − A language L is said to be coNP-complete if L ∈ coNP and, for every A ∈ coNP, A ≤p L. Theorem − If L is NP-complete, then its complement is coNP-complete. Proof − Let L be NP-complete and A ∈ coNP. Then the complement of A is in NP and is therefore polynomial-time reducible to L by some reduction R; the same R reduces A to the complement of L.
  2. Reductions give a strategy for proving new problems NP-complete. Specifically, to prove a new problem R NP-complete, the following steps are sufficient: prove that R is in NP; then find an already known NP-complete problem R₀ and come up with a transformation that reduces R₀ to R. For this strategy to be effective, we need at least one problem already known to be NP-complete to start from, such as SAT (by the Cook–Levin theorem).
  3. There are thousands of NP-complete problems, and a lot of extremely smart people have had NP-complete problems that they really cared about; either on purpose or accidentally, they have been trying to develop efficient algorithms for them. No one has ever succeeded in over half a century of serious computational work. The second reason is ...

A problem is in the class NPC if it is in NP and is as hard as any problem in NP. A problem is NP-hard if all problems in NP are polynomial time reducible to it, even though it may not be in NP itself.

If a polynomial time algorithm existed for any of these problems, all problems in NP would be polynomial time solvable. These problems are called NP-complete. The phenomenon of NP-completeness is important for both theoretical and practical reasons.

Definition of NP-Completeness

A language B is NP-complete if it satisfies two conditions −

  • B is in NP

  • Every A in NP is polynomial time reducible to B.

If a language B satisfies the second property, but not necessarily the first one, then B is known as NP-Hard. Informally, a search problem B is NP-Hard if there exists some NP-Complete problem A that Turing reduces to B.

A problem that is NP-Hard cannot be solved in polynomial time unless P = NP. Therefore, if a problem is proved to be NPC, there is no need to waste time trying to find an efficient exact algorithm for it. Instead, we can focus on designing approximation algorithms, as illustrated below.
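For example, Vertex Cover (listed among the NP-Hard problems below) has a simple polynomial-time 2-approximation: repeatedly pick an uncovered edge and take both of its endpoints. A minimal Python sketch, with the graph represented as an edge list (the representation and names are assumptions made for this illustration):

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for Vertex Cover.

    Repeatedly pick an edge that is not yet covered and add both of its
    endpoints to the cover. The resulting cover is at most twice the size
    of an optimal one, and the algorithm runs in time linear in |E|."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge (u, v) is still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 0-1-2-3. An optimal cover is {1, 2}; the approximation
# returns {0, 1, 2, 3}, which is within the guaranteed factor of 2.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```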

NP-Complete Problems


Following are some NP-Complete problems, for which no polynomial time algorithm is known.

  • Determining whether a graph has a Hamiltonian cycle
  • Determining whether a Boolean formula is satisfiable, etc.


NP-Hard Problems

The following problems are NP-Hard −

  • The circuit-satisfiability problem
  • Set Cover
  • Vertex Cover
  • Travelling Salesman Problem

In this context, we will now prove that TSP is NP-Complete.

TSP is NP-Complete

The traveling salesman problem consists of a salesman and a set of cities. The salesman has to visit each of the cities, starting from a certain one and returning to the same city. The challenge of the problem is that the traveling salesman wants to minimize the total length of the trip.

Proof


To prove that TSP is NP-Complete, first we have to prove that TSP belongs to NP. In the decision version of TSP, a certificate is a proposed tour: we check that the tour contains each vertex exactly once, calculate the total cost of the edges of the tour, and finally check whether the cost is at most a given bound. This can be completed in polynomial time. Thus TSP belongs to NP.
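As a sketch of this verification step (a minimal Python version, assuming the decision formulation in which cost is an n × n cost matrix and bound is the cost threshold; the names are illustrative):

```python
def verify_tsp(n, cost, tour, bound):
    """Polynomial-time verifier for the decision version of TSP:
    does `tour` visit each of the n vertices exactly once, and is its
    total cost (including the return edge) at most `bound`?"""
    if sorted(tour) != list(range(n)):          # each vertex exactly once
        return False
    total = sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= bound

# A 4-city instance in which the tour 0-1-2-3-0 has total cost 4.
cost = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
print(verify_tsp(4, cost, [0, 1, 2, 3], bound=4))   # True
```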

Secondly, we have to prove that TSP is NP-hard. To prove this, one way is to show that Hamiltonian cycle ≤p TSP (as we know that the Hamiltonian cycle problem is NP-complete).

Assume G = (V, E) to be an instance of the Hamiltonian cycle problem.

From it, an instance of TSP is constructed. We create the complete graph G' = (V, E'), where

$$E' = \lbrace (i, j) \colon i, j \in V \text{ and } i \neq j \rbrace$$

Thus, the cost function is defined as follows −

$$t(i, j) = \begin{cases} 0 & \text{if } (i, j) \in E \\ 1 & \text{otherwise} \end{cases}$$
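A minimal Python sketch of this construction, with the graph given as a vertex list and an edge set (the representation is an assumption for illustration):

```python
def hamiltonian_to_tsp(vertices, edges):
    """Reduce a Hamiltonian cycle instance G = (V, E) to a TSP instance.

    Build the cost function t of the complete graph G' on V: cost 0 for
    pairs that are edges of G and cost 1 otherwise. G has a Hamiltonian
    cycle iff G' has a tour of cost at most 0. The construction takes
    time polynomial in |V|."""
    edge_set = {frozenset(e) for e in edges}    # undirected edges of G
    t = {(i, j): (0 if frozenset((i, j)) in edge_set else 1)
         for i in vertices for j in vertices if i != j}
    return t, 0                                 # cost function and cost bound

# A 4-cycle has a Hamiltonian cycle, so the resulting TSP instance has a 0-cost tour.
t, bound = hamiltonian_to_tsp([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(t[(0, 1)], t[(0, 2)], bound)   # 0 1 0
```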

Now, suppose that a Hamiltonian cycle h exists in G. It is clear that the cost of each edge in h is 0 in G' as each edge belongs to E. Therefore, h has a cost of 0 in G'. Thus, if graph G has a Hamiltonian cycle, then graph G' has a tour of 0 cost.

Conversely, assume that G' has a tour h' of cost at most 0. The costs of the edges in E' are 0 or 1 by definition, so each edge of h' must have a cost of 0. We therefore conclude that h' contains only edges in E, i.e., h' is a Hamiltonian cycle in G.

We have thus proven that G has a Hamiltonian cycle if and only if G' has a tour of cost at most 0. Since the reduction takes polynomial time, TSP is NP-complete.

Science and technology journalists pride themselves on the ability to explain complicated ideas in accessible ways, but there are some technical principles that we encounter so often in our reporting that paraphrasing them or writing around them begins to feel like missing a big part of the story. So in a new series of articles called 'Explained,' MIT News Office staff will explain some of the core ideas in the areas they cover, as reference points for future reporting on MIT research.
In the 1995 Halloween episode of The Simpsons, Homer Simpson finds a portal to the mysterious Third Dimension behind a bookcase, and desperate to escape his in-laws, he plunges through. He finds himself wandering across a dark surface etched with green gridlines and strewn with geometric shapes, above which hover strange equations. One of these is the deceptively simple assertion that P = NP.
In fact, in a 2002 poll, 61 mathematicians and computer scientists said that they thought P probably didn’t equal NP, to only nine who thought it did — and of those nine, several told the pollster that they took the position just to be contrary. But so far, no one’s been able to decisively answer the question one way or the other. Frequently called the most important outstanding question in theoretical computer science, the equivalency of P and NP is one of the seven problems that the Clay Mathematics Institute will give you a million dollars for proving — or disproving. Roughly speaking, P is a set of relatively easy problems, and NP is a set that includes what seem to be very, very hard problems, so P = NP would imply that the apparently hard problems actually have relatively easy solutions. But the details are more complicated.
Computer science is largely concerned with a single question: How long does it take to execute a given algorithm? But computer scientists don’t give the answer in minutes or milliseconds; they give it relative to the number of elements the algorithm has to manipulate.
Imagine, for instance, that you have an unsorted list of numbers, and you want to write an algorithm to find the largest one. The algorithm has to look at all the numbers in the list: there’s no way around that. But if it simply keeps a record of the largest number it’s seen so far, it has to look at each entry only once. The algorithm’s execution time is thus directly proportional to the number of elements it’s handling — which computer scientists designate N. Of course, most algorithms are more complicated, and thus less efficient, than the one for finding the largest number in a list; but many common algorithms have execution times proportional to N², or N times the logarithm of N, or the like.
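In Python, that single-pass search might look like the following (a sketch added here for illustration, not part of the original article):

```python
def find_largest(numbers):
    """Return the largest element of a non-empty list in a single pass,
    so the running time grows in direct proportion to N = len(numbers)."""
    largest = numbers[0]
    for x in numbers[1:]:      # each element is examined exactly once
        if x > largest:
            largest = x
    return largest

print(find_largest([7, 3, 42, 8]))   # 42
```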
A mathematical expression that involves N’s and N²s and N’s raised to other powers is called a polynomial, and that’s what the “P” in “P = NP” stands for. P is the set of problems whose solution times are proportional to polynomials involving N’s.
Obviously, an algorithm whose execution time is proportional to N³ is slower than one whose execution time is proportional to N. But such differences dwindle to insignificance compared to another distinction, between polynomial expressions — where N is the number being raised to a power — and expressions where a number is raised to the Nth power, like, say, 2ᴺ.
If an algorithm whose execution time is proportional to N takes a second to perform a computation involving 100 elements, an algorithm whose execution time is proportional to N³ takes almost three hours. But an algorithm whose execution time is proportional to 2ᴺ takes 300 quintillion years. And that discrepancy gets much, much worse the larger N grows.
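The arithmetic behind those figures can be checked directly; here is a back-of-the-envelope Python sketch, with the constant of proportionality fixed so that the linear algorithm takes one second at N = 100:

```python
N = 100
sec_per_step = 1 / N   # calibrated: the linear-time algorithm takes 1 second at N = 100

hours = sec_per_step * N**3 / 3600
years = sec_per_step * 2**N / (3600 * 24 * 365.25)
print(f"N^3 algorithm: about {hours:.1f} hours")   # roughly 2.8 hours
print(f"2^N algorithm: about {years:.1e} years")   # on the order of 10**20 years
```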
NP (which stands for nondeterministic polynomial time) is the set of problems whose solutions can be verified in polynomial time. But as far as anyone can tell, many of those problems take exponential time to solve. Perhaps the most famous exponential-time problem in NP, for example, is finding prime factors of a large number. Verifying a solution just requires multiplication, but solving the problem seems to require systematically trying out lots of candidates.
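To make that asymmetry concrete, here is a small Python sketch; naive trial division stands in for "systematically trying out lots of candidates", and real factoring algorithms are far more sophisticated:

```python
def verify_factorization(n, factors):
    """Verification is just multiplication, cheap even for very large numbers."""
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

def find_factor(n):
    """Naive trial division: the number of candidates grows exponentially
    with the number of digits of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None   # n is prime

print(verify_factorization(8051, [83, 97]))   # True, checked with one multiplication
print(find_factor(8051))                      # 83, found only after trying many candidates
```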
So the question “Does P equal NP?” means “If the solution to a problem can be verified in polynomial time, can it be found in polynomial time?” Part of the question’s allure is that the vast majority of NP problems whose solutions seem to require exponential time are what’s called NP-complete, meaning that a polynomial-time solution to one can be adapted to solve all the others. And in real life, NP-complete problems are fairly common, especially in large scheduling tasks. The most famous NP-complete problem, for instance, is the so-called traveling-salesman problem: given N cities and the distances between them, can you find a route that hits all of them but is shorter than … whatever limit you choose to set?
Given that P probably doesn’t equal NP, however — that efficient solutions to NP problems will probably never be found — what’s all the fuss about? Michael Sipser, the head of the MIT Department of Mathematics and a member of the Computer Science and Artificial Intelligence Lab’s Theory of Computation Group (TOC), says that the P-versus-NP problem is important for deepening our understanding of computational complexity.
“A major application is in the cryptography area,” Sipser says, where the security of cryptographic codes is often ensured by the complexity of a computational task. The RSA cryptographic scheme, which is commonly used for secure Internet transactions — and was invented at MIT — “is really an outgrowth of the study of the complexity of doing certain number-theoretic computations,” Sipser says.
Similarly, Sipser says, “the excitement around quantum computation really boiled over when Peter Shor” — another TOC member — “discovered a method for factoring numbers on a quantum computer. Peter's breakthrough inspired an enormous amount of research both in the computer science community and in the physics community.” Indeed, for a while, Shor’s discovery sparked the hope that quantum computers, which exploit the counterintuitive properties of extremely small particles of matter, could solve NP-complete problems in polynomial time. But that now seems unlikely: the factoring problem is actually one of the few hard NP problems that is not known to be NP-complete.
Sipser also says that “the P-versus-NP problem has become broadly recognized in the mathematical community as a mathematical question that is fundamental and important and beautiful. I think it has helped bridge the mathematics and computer science communities.”
But if, as Sipser says, “complexity adds a new wrinkle on old problems” in mathematics, it’s changed the questions that computer science asks. “When you’re faced with a new computational problem,” Sipser says, “what the theory of NP-completeness offers you is, instead of spending all of your time looking for a fast algorithm, you can spend half your time looking for a fast algorithm and the other half of your time looking for a proof of NP-completeness.”
Sipser points out that some algorithms for NP-complete problems exhibit exponential complexity only in the worst-case scenario and that, in the average case, they can be more efficient than polynomial-time algorithms. But even there, NP-completeness “tells you something very specific,” Sipser says. “It tells you that if you’re going to look for an algorithm that’s going to work in every case and give you the best solution, you’re doomed: don’t even try. That’s useful information.”