I am studying perceptron learning and am reading the convergence proof for the algorithm at the following link (https://www.cse.iitb.ac.in/~shivaram/teaching/cs344+386-s2017/resources/classnote-1.pdf), specifically Assumption 1 (Linear Separability), as shown in the figure. I don't understand why ||w*|| = 1 is assumed, or why this condition is necessary. Could you help me understand it? Thanks!

The norm assumption is there only to simplify the analysis; it is easy to show that it is not necessary, since the weaker assumption (a separating w of arbitrary norm) already implies it.
Let us assume that there exists w with ||w|| = Z > 0 and gamma > 0 such that

y_i (w · x_i) >= gamma for every training example (x_i, y_i).

Then, dividing both sides by Z = ||w||, the same gamma gives

y_i ((w / ||w||) · x_i) >= gamma / Z,

thus for w* = w / ||w|| (so ||w*|| = 1) and gamma* = gamma / Z > 0 we get

y_i (w* · x_i) >= gamma* for every i,

which concludes the argument: if there exists any w (with arbitrary norm Z) and gamma satisfying linear separability, then there also exists a w* with norm 1 satisfying it, where we simply divide the original gamma by Z to get gamma* = gamma / Z.
The only reason to state the assumption this way is to keep the constants in the proof simple; the unit-norm condition itself adds no real restriction.
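
To make the rescaling concrete, here is a minimal sketch in Python/NumPy (the toy data set, the chosen w, and all variable names are my own illustration, not from the linked notes): it builds a linearly separable set, computes the margin gamma of an arbitrary separating w with norm Z, and checks that w* = w / ||w|| has unit norm and margin exactly gamma / Z.

```python
import numpy as np

# Hypothetical toy data (not from the original post): points in 2D,
# labelled by an arbitrary separating vector w with norm Z = 5.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w = np.array([3.0, -4.0])                # ||w|| = Z = 5, not unit norm
y = np.sign(X @ w)
keep = np.abs(X @ w) > 0.5               # drop near-boundary points so gamma > 0
X, y = X[keep], y[keep]

Z = np.linalg.norm(w)                    # Z = ||w|| > 0
gamma = np.min(y * (X @ w))              # margin achieved by the unnormalized w

w_star = w / Z                           # rescaled separator, ||w_star|| = 1
gamma_star = np.min(y * (X @ w_star))    # margin achieved by w_star

print("Z          =", Z)
print("gamma      =", gamma)
print("gamma / Z  =", gamma / Z)
print("gamma_star =", gamma_star)
assert np.isclose(gamma_star, gamma / Z) # rescaling divides the margin by Z
assert gamma_star > 0                    # ...but keeps it strictly positive
```

The asserts mirror the argument above: normalizing w only rescales the margin from gamma to gamma / Z, and a strictly positive margin for a unit-norm w* is all the convergence proof needs.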