Monday, December 04, 2006

How to find free mp3s using Google





Hi...
The other day I was chatting with my friend, and he kept bugging me with just one question: "Anky, how can we get free music on the net?" I tried explaining forums to him, but then he said, "Oh, forums require sign-ups... that's so irritating. Can't we use Google instead?" That very word struck a chord in my head... and BINGO!!! This is what I landed up with...

1) Go to Google.com :|

2) Now enter the following query into the Search box.

?intitle:index.of? mp3 [artist] [title]
Replace [artist] with the artist/singer/band you're looking for and [title] with the song title. Using just one of them is also fine. For example:
?intitle:index.of? mp3 coldplay
?intitle:index.of? mp3 “simple plan”
?intitle:index.of? mp3 “simple plan” untitled
?intitle:index.of? mp3 “welcome to my life”
3) Sample search below:

4) If you click on the link above, you will get this (this is just a sample; it depends on the file you're searching for).
Now you only need to save the file using the usual Right Click - Save As method :).
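If you'd rather not retype the operator every time, here is a tiny TypeScript sketch that assembles the query from step 2 and the matching Google URL (the function name is mine, not part of the trick):

// Build the open-directory MP3 query and the corresponding Google search URL.
function mp3Query(artist: string, title = ""): string {
  const q = `?intitle:index.of? mp3 ${artist} ${title}`.trim();
  return "http://www.google.com/search?q=" + encodeURIComponent(q);
}
console.log(mp3Query('"simple plan"', "untitled"));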

Hope all you people like this trick.....Have fun....and may you fill your HDD with all the music you want...



Saturday, December 02, 2006

Destroy your Website

Here’s one of my wonderful time-wasters that I pass on to people when I lack the time to write anything really substantial. So yes, this is a cop-out.

Anyway, NetDisaster is a cool site. Basically, you go there, enter any website URL you like, choose your particular disaster, then let it run. You can choose everything from worms to acid pee, from dinosaurs to puke. Enjoy.

Here’s mine, tomatoes and a boy peeing.


Friday, December 01, 2006

‘TOP 11 Tools for Sending BIG Files’ released


You, me, and I bet every webbie once in a while needs to send over a file that's bigger than the maximum allowed email attachment (e.g. a video clip, movie, or software program). So two days ago I decided to make a list of the better websites and tools for sending big files. After some extensive searching I had more than 100 sites; obviously this was too much, so I filtered them further, cancelling out the ones that weren't free, weren't simple to use, or had a maximum file size below 500 MB. Below is what I got afterwards:


Service            Max. size   Days before file is deleted    Sign-up required
YouSendIt          1 GB        7 days                         No
Gigasize           1.5 GB      90 days                        Yes
TransferBigFiles   1 GB        5 days                         No
MegaShares         1.5 GB      7 days                         No
BigUpload          500 MB      30 days after last download    No
MailBigFile        500 MB      7 days                         No
FileUpload         500 MB      7 days                         No
Zupload            500 MB      30 days after last download    No
Spread-it          500 MB      14 days                        No

If you send/receive big files frequently and do so among friends, then instead of using the above I recommend installing one of the following programs; each is free and user-friendly in its own way.

AllPeers

AllPeers is a Firefox plugin that lets you easily send/receive files among friends. After installing it you get a small AllPeers icon in your browser, which opens a so-called navigator menu displaying a list of contacts at the side of your browser window. To send a file, all you need to do is drag and drop it over the contact's name. And because the file transfer takes place directly between your PC and the recipient's, there is no limit on file size.
Download(Mac/Windows)

Pando

Pando (Mac/Windows) is a small desktop program with the same drag-and-drop file-sending function. It is easy to use, processor friendly, and has a really cool design (image below). The installation file includes a plugin for Microsoft Outlook, and if needed, plugins for Yahoo Messenger and Skype can be downloaded separately.


Download(Mac/Windows)


Flash Site Of the Day


The other day, while I was browsing the web, I suddenly bumped into this website. It displays one of the most amazing feats I have ever seen in the website publishing business.
Just hover your mouse over the area and see how the ink blots, just like on rice paper. Keep clicking the mouse to experiment with new colours... and who knows, you could just create another piece of MODERN ART. After all, modern art is something that's just beyond my understanding: a boring painting with no head or tail being adjudged the best of all the arts. God knows what the admirers find in it...
Anyway, you guys keep trying to make your modern art, while I'll do my work of testing other sites and bringing them to you...

Website: http://www.jacksonpollock.org/



Monday, November 27, 2006

Google + Firefox = Free Music Downloads







There’s a lot you can do with Google if you take advantage of its advanced search features. Expanding on this tip, which shows you how to find music in open directories, here’s a step-by-step walkthrough of how to use Firefox Smart Keyword searches to speed up the process.

All you need to do is:


  1. Create a bookmark in Firefox
  2. Use the URLs in this text file as the bookmark location in your Firefox bookmarks (like in the pic above)
  3. Assign a keyword to it - e.g. music
  4. Type the keyword (e.g. music) followed by the search term (e.g. beatles) directly into the address bar (NOT the search box). For example, type in music beatles, and Google will search open directories for Beatles music files that you can download




You can modify the code in the text file to change the file extensions, which opens up a whole new window of possibilities. For example, you can change it to PDF and DOC to look for e-books, or AVI and MPG to look for movies.
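I can't paste the exact contents of that text file here, but a typical bookmark-location URL for this trick looks something like the sketch below (treat the exact operators as an assumption; %s is the placeholder Firefox fills in with your search term):

// A typical open-directory smart-keyword location (an assumption; the linked
// text file may differ). Swap the extension group as described above.
const ext = "(wma|mp3)"; // e.g. "(pdf|doc)" for e-books, "(avi|mpg)" for movies
const bookmarkLocation =
  'http://www.google.com/search?q=-inurl:(htm|html|php) intitle:"index of" ' +
  `+"last modified" +"parent directory" +description +size +${ext} "%s"`;
console.log(bookmarkLocation);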

By the way, I’m assuming you already own the media you will be downloading :)


I hope you liked that...



Saturday, November 18, 2006

How to download from YouTube?



Everyone knows that YouTube provides all kinds of videos online, from music to funny clips, and lets you embed them on your site, but it doesn't allow downloads. With this simple trick you can download videos from YouTube, Google Video, Yahoo Video or any other site...


HOW TO DOWNLOAD FROM YOUTUBE

  • go to http://www.youtube.com

  • find the permalink for the video you want (not the url in your address bar, but the url from the box a little below the video)

  • then go here: http://javimoya.com/blog/youtube_en.php

  • put that url you just copied in the top box and click Download From Youtube

  • a small link should come up at the top left and say "Video Download."

  • right click that and save the file as "filename.flv"
  • the flv part is important

  • wait for it to download to your computer.

  • then go here: http://www.download.com/CinemaForge/...-10373646.html

  • download that program.

  • once it's finished, open it; in the input section, find the .flv file you just downloaded.

  • then in the output file, choose the new file extension you want, mpeg, wmv, etc.

  • then wait for it to convert.

And finally enjoy the YouTube Videos……
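By the way, the permalink from the second step always carries a v= parameter that identifies the video. If you ever need just that id, here is a minimal TypeScript sketch for pulling it out (it assumes the usual watch?v=... format; the id shown is made up):

// Extract the video id from a YouTube permalink.
function videoId(permalink: string): string | null {
  const match = permalink.match(/[?&]v=([^&]+)/);
  return match ? match[1] : null;
}
console.log(videoId("http://www.youtube.com/watch?v=abc123xyz")); // "abc123xyz"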

So if you liked this post… please do drop in your comments…

Saturday, November 11, 2006

Google Technology....Unearthed..

About Google

Google, Inc. (NASDAQ: GOOG and LSE: GGEA) is an American public corporation, first incorporated as a privately held corporation on 7 September 1998, that designed and manages the Internet's most used search engine. The company has approximately 8,000 employees and is based in Mountain View, California. Eric Schmidt, former chief executive officer of Novell, was named Google's CEO when co-founder Larry Page stepped down.

The name "Google" originated from a misspelling of "googol,"which refers to 10100 (a 1 followed by one-hundred zeros). Google has had a major impact on online culture. The verb "google" was recently added to both the Merriam Webster Collegiate Dictionary and the Oxford English Dictionary, meaning "to use the Google search engine to obtain information on the Internet."

Google began as a research project in January, 1996 by Larry Page and Sergey Brin, two Ph.D. students at Stanford University, California. They hypothesized that a search engine that analyzed the relationships between websites would produce better results than existing techniques (existing search engines at the time essentially ranked results according to how many times the search term appeared on a page). It was originally nicknamed "BackRub" because the system checked backlinks to estimate a site's importance. A small search engine called RankDex was already exploring a similar strategy.
Google, being the most popular Internet search engine (over 50% market share), requires large computational resources in order to provide its service. The following describes Google's technological infrastructure, as presented in the company's public announcements.


Network topology

Google maintains an estimated 450,000 servers, arranged in racks located in clusters in cities around the world, with major centers in Mountain View, California; Virginia; Atlanta, Georgia; Dublin, Ireland; and a new facility constructed in 2006 in The Dalles, Oregon. When an attempt to connect to Google is made, Google's DNS servers perform load balancing to allow the user to access Google's content most rapidly. This is done by sending the user the IP address of a cluster that is not under heavy load and is geographically proximate to them. Each cluster has a few thousand servers, and upon connection to a cluster further load balancing is performed by hardware in the cluster, in order to send the queries to the least loaded web server. This makes Google one of the biggest and most complex known content delivery networks.

Racks are custom-made and contain 40 to 80 servers (20 to 40 1U servers on either side); new servers are 2U rackmount systems. Each rack has a switch, and servers are connected to it via a 100 Mbit/s Ethernet link. The switches are in turn connected to a core gigabit switch using one or two gigabit uplinks.

Server types

Google's server infrastructure is divided into several types, each assigned a different purpose:

  • Google DNS Servers answer DNS requests and serve as intelligent, worldwide load balancers. They guess the data center nearest to the user to speed up all HTTP requests.
  • Google Web Servers coordinate the execution of queries sent by users, then format the result into an HTML page. The execution consists of sending queries to index servers, merging the results, computing their rank, retrieving a summary for each hit (using the document server), asking for suggestions from the spelling servers, and finally getting a list of advertisements from the ad server.

Server hardware and software

Original hardware

The original hardware (ca. 1998) that was used by Google when it was located at Stanford University included:

  • Sun Ultra II with dual 200MHz processors, and 256MB of RAM. This was the main machine for the original Backrub system.
  • 2 x 300 MHz Dual Pentium II Servers donated by Intel, they included 512MB of RAM and 9 x 9GB hard drives between the two. It was on these that the main search ran.
  • F50 IBM RS/6000 donated by IBM, included 4 processors, 512MB of memory and 8 x 9GB hard drives.
  • Two additional boxes included 3 x 9GB hard drives and 6 x 4GB hard drives respectively (the original storage for Backrub). These were attached to the Sun Ultra II.
  • IBM disk expansion box with another 8 x 9GB hard drives donated by IBM.
  • Homemade disk box which contained 10 x 9GB SCSI hard drives.

Current hardware

Servers are commodity-class x86 PCs running customized versions of GNU/Linux. Indeed, the goal is to purchase CPU generations that offer the best performance per unit of power, not absolute performance. Other than wages, the biggest cost that Google faces is electric power consumption. Estimates of the power required for over 450,000 servers range upwards of 20 megawatts, which could cost on the order of US$2 million per month in electricity charges.

For this reason, the Pentium II has been the most favoured processor, but this could change in the future as processor manufacturers are increasingly limited by the power output of their devices.

Specifications:

The exact size and whereabouts of the data centers Google uses are unknown, and official figures remain intentionally vague. In a 2000 estimate, Google's server farm consisted of 6000 processors, 12,000 common IDE disks (2 per machine, and one processor per machine), at four sites: two in Silicon Valley, California and two in Virginia.[3] Each site had an OC-48 (2488 Mbit/s) internet connection and an OC-12 (622 Mbit/s) connection to other Google sites. The connections are eventually routed down to 4 x 1 Gbit/s lines connecting up to 64 racks, each rack holding 80 machines and two ethernet switches.

Project 02

Google is currently developing a supercomputer at a data center located in the town of The Dalles, Oregon, on the Columbia River, approximately 80 miles from Portland. The project, codenamed Project 02, is expected to substantially add to their current global network capable of processing billions of search queries per day and a growing repertoire of other services. The new complex is approximately the size of two football fields with cooling towers four stories high. The project has already created hundreds of jobs in the area, mainly construction jobs at this point, with an expected 60 to 200 permanent positions later this year. Real estate prices in the small town of 12,000 have also increased by 40%.

Server operation

Most operations are read-only. When an update is required, queries are redirected to other servers, so as to simplify consistency issues. Queries are divided into sub-queries, which may be sent to different servers in parallel, thus reducing latency.

In order to avoid the effects of unavoidable hardware failure, data stored in the servers may be mirrored using hardware RAID. Software is also designed to be fault tolerant. Thus when a system goes down, data is still available on other servers, which increases the reliability.

  • Data-gathering servers are permanently dedicated to spidering the Web. They update the index and document databases and apply Google's algorithms to assign ranks to pages.
  • Index servers each contain a set of index shards. They return a list of document IDs ("docid"), such that documents corresponding to a certain docid contain the query word. These servers need less disk space, but suffer the greatest CPU workload.
  • Document servers store documents. Each document is stored on dozens of document servers. When performing a search, a document server returns a summary for the document based on query words. They can also fetch the complete document when asked. These servers need more disk space.
  • Ad servers manage advertisements offered by services like AdWords and AdSense.
  • Spelling servers make suggestions about the spelling of queries.

PageRank is a patented method to assign a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is also called the PageRank of E and denoted by PR(E).

PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank [Vise and Malseed, 2005]) and Sergey Brin as part of a research project about a new kind of search engine. The project started in 1995 and led to a functional prototype, named Google, in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors which determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.

Some algorithm details

PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for any-size collection of documents. It is assumed in several research papers that the distribution is evenly divided between all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.

A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.

Simplified PageRank algorithm

Suppose a small universe of four web pages: A, B, C and D. The initial approximation of PageRank would be evenly divided between these four documents. Hence, each document would begin with an estimated PageRank of 0.25.

If pages B, C, and D each only link to A, they would each confer 0.25 PageRank to A. All PageRank PR( ) in this simplistic system would thus gather to A because all links would be pointing to A.

PR(A) = PR(B) + PR(C) + PR(D)

But then suppose page B also has a link to page C, and page D has links to all three pages. The value of the link-votes is divided among all the outbound links on a page. Thus, page B gives a vote worth 0.125 to page A and a vote worth 0.125 to page C. Only one third of D's PageRank is counted for A's PageRank (approximately 0.083).

PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}
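Plugging in the initial values (every page starts at 0.25; B has two outbound links, C has one, and D has three):

PR(A) = \frac{0.25}{2} + \frac{0.25}{1} + \frac{0.25}{3} = 0.125 + 0.25 + 0.0833\ldots \approx 0.458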

In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the normalized number of outbound links L( ) (it is assumed that links to specific URLs only count once per document).

PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}

PageRank algorithm including damping factor

The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.

The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents in the collection) and this term is then added to the product of (the damping factor and the sum of the incoming PageRank scores).

That is,

PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)

or (where N is the number of documents in the collection)

PR(A) = \frac{1 - d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right)

So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The second formula above supports the original statement in Page and Brin's paper that "the sum of all PageRanks is one". Unfortunately, however, Page and Brin gave the first formula, which has led to some confusion.

Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.

The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are all equally probable and are the links between pages.

If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. However, the solution is quite simple. If the random surfer arrives at a sink page, it picks another URL at random and continues surfing again.

When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web, with a residual probability of usually d = 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature.

So, the equation is as follows:

PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)}

where p_1, p_2, ..., p_N are the pages under consideration, M(p_i) is the set of pages that link to p_i, L(p_j) is the number of links coming from page p_j, and N is the total number of pages.
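To make the iterations concrete, here is a minimal TypeScript power-iteration sketch of this formula (my own sketch, not Google's actual code), using the four-page example from earlier and the sink convention described above:

// Iteratively apply PR(p) = (1-d)/N + d * sum(PR(q)/L(q)) over incoming links q.
type Graph = Map<string, string[]>; // page -> pages it links to
function pagerank(graph: Graph, d = 0.85, iterations = 50): Map<string, number> {
  const pages = [...graph.keys()];
  const N = pages.length;
  let pr = new Map(pages.map(p => [p, 1 / N] as [string, number])); // even start
  for (let it = 0; it < iterations; it++) {
    const next = new Map(pages.map(p => [p, (1 - d) / N] as [string, number]));
    for (const [page, links] of graph) {
      // Sinks are treated as linking to all other pages, as described above.
      const targets = links.length > 0 ? links : pages.filter(p => p !== page);
      const share = pr.get(page)! / targets.length;
      for (const t of targets) next.set(t, next.get(t)! + d * share);
    }
    pr = next;
  }
  return pr;
}
// The example from above: B, C and D link to A; B also links to C; D links to all three.
const graph: Graph = new Map([
  ["A", []], ["B", ["A", "C"]], ["C", ["A"]], ["D", ["A", "B", "C"]],
]);
console.log(pagerank(graph)); // the values sum to approximately 1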

The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is

\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}

where R is the solution of the equation

\mathbf{R} =  \begin{bmatrix} {(1-d)/ N} \\ {(1-d) / N} \\ \vdots \\ {(1-d) / N} \end{bmatrix}  + d  \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots & & \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & & & \ell(p_N,p_N) \end{bmatrix}  \mathbf{R}

where the adjacency function \ell(p_i,p_j) is 0 if page p_j does not link to p_i, and normalised such that, for each j,

\sum_{i = 1}^N \ell(p_i,p_j) = 1,

i.e. the elements of each column sum up to 1.

This is a variant of the eigenvector centrality measure used commonly in network analysis.

The values of the PageRank eigenvector are fast to approximate (only a few iterations are needed) and in practice it gives good results.

As a result of Markov theory, it can be shown that the PageRank of a page is the probability of being at that page after lots of clicks. This happens to equal t^{-1}, where t is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.

The main disadvantage is that it favors older pages, because a new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages). The Google Directory (itself a derivative of the Open Directory Project) is an exception in which PageRank is not used to determine search results rankings.

Several strategies have been proposed to accelerate the computation of PageRank.

Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which seeks to determine which documents are actually highly valued by the Web community.

Google is known to actively penalize link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.

False or spoofed PageRank

While the PR shown in the Toolbar is considered to be accurate (at the time of publication by Google) for most sites, it must be noted that this value is also easily manipulated. A current flaw is that any low PageRank page that is redirected, via a 302 server header or a "Refresh" meta tag, to a high PR page causes the lower PR page to acquire the PR of the destination page. In theory a new, PR0 page with no incoming links can be redirected to the Google home page - which is a PR 10 - and by the next PageRank update the PR of the new page will be upgraded to a PR10. This is called spoofing and is a known failing or bug in the system. Any page's PR can be spoofed to a higher or lower number of the webmaster's choice and only Google has access to the real PR of the page. Spoofing is generally detected by running a Google search for a URL with questionable PR, as the results will display the URL of an entirely different site (the one redirected to) in its results.

Google's home page is often considered to be automatically rated a 10/10 by the Google Toolbar's PageRank feature, but its PageRank has at times shown a surprising result of only 8/10.

Source: www.wikipedia.org

AJAX

This is what GOOGLE is based on...

Ajax, shorthand for Asynchronous JavaScript and XML, is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability.

The Ajax technique uses a combination of:

  • XHTML (or HTML) and CSS, for marking up and styling information
  • the Document Object Model, accessed with JavaScript, to dynamically display and interact with the information presented
  • the XMLHttpRequest object, to exchange data asynchronously with the web server
  • XML (or alternatives such as plain text or JSON) as the format for the data transferred

Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together.

Pros

Bandwidth utilization

By generating the HTML locally within the browser, and only bringing down JavaScript calls and the actual data, Ajax web pages can appear to load quickly since the payload coming down is much smaller in size. An example of this technique is a large result set where multiple pages of data exist. With Ajax, the HTML of the page, e.g., a table control and related TD and TR tags can be produced locally in the browser and not brought down with the first page of data. If the user clicks other pages, only the data is brought down, and populated into the HTML generated in the browser.
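As a rough sketch of that paging pattern (the endpoint and element names here are invented for illustration; TypeScript for a browser page):

// Fetch just one page of rows in the background and patch it into the
// existing table, instead of reloading the whole document.
function loadPage(pageNumber: number): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "rows.html?page=" + pageNumber, true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Only the row data came down the wire; the surrounding HTML stays put.
      document.getElementById("resultBody")!.innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
}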

Interactivity

Ajax applications are mainly executed on the user's machine, by manipulating the current page within their browser using document object model methods. Ajax can be used for a multitude of tasks such as updating or deleting records; expanding web forms; returning simple search queries; or editing category trees—all without the requirement to fetch a full page of HTML each time a change is made. Generally only small requests need to be sent to the server, and relatively short responses are sent back. This permits the development of more interactive applications featuring more responsive user interfaces due to the use of DHTML techniques.

While the Ajax platform is more restricted than the Java platform, current Ajax applications effectively fill part of the niche first served by Java applets: extending the browser with lightweight mini-applications.

Cons

Complex to learn.

Thursday, November 09, 2006

Finally... Windows Vista turning into a reality!!!

Microsoft announced today that Windows Vista has been released to manufacturing. Vista will mark the first big shift in Microsoft consumer operating systems in over 5 years. According to Neowin, the RTM version of Vista will be available for MSDN subscribers sometime after November 10, while businesses should begin receiving their copies on or shortly after November 30. Vista will be launched into the retail sector on January 30, 2007.


With Vista, Microsoft promises increased security with an improved firewall and Windows Defender and User Account Control. Other fresh additions include integrated desktop search, Internet Explorer 7.0, Windows Sidebar, Windows Sideshow, built-in system diagnostics, improved gaming support, fully integrated Speech Recognition as well as support for Windows SuperFetch, Windows ReadyBoost and Windows ReadyDrive.

Windows Vista will be available in four distinct retail versions:

  • Windows Vista Home Basic, $199/$99.95 (full/upgrade) - Provides a basic platform for home users who want to keep tabs on email and Internet activity. Comes standard with Vista’s new Search Explorer, Sidebar and Parental Controls.
  • Windows Vista Home Premium, $239/$159 - Builds on Home Basic by adding the Windows Aero interface, Windows Media Center functionality, Windows Tablet PC technology and integrated DVD burning.
  • Windows Vista Business, $299/$199 - Supports the Aero user interface, offers improved document managing and Windows Tablet PC functionality.
  • Windows Vista Ultimate, $399/$259 - Vista Ultimate combines the functionality of Vista Home Premium and Vista Business.

And then one very special version: Windows.Vista.Corporate.Edition.CRACKED-SomeGroup for $0.00 ;-)

Monday, November 06, 2006

RapidShare, MegaUpload and YouSendIt Hacks

Download RapidShare Link Grab Helper v1.0

Use a download manager to download your RapidShare files, with as many files downloading as you want (one file for every proxy you choose).

Step 1. Paste your RapidShare link (hxxp://rapidshare.de/fil..ar.html)
Step 2. Choose your proxy (one proxy for each file if you want multiple downloads) or leave it blank (as set in your IE)
Step 3. Click on Grab
Step 4. Type the 3 RED letters in the blank space.
Step 5. Copy the RapidShare direct link and paste it into your download manager (FlashGet, IDM, IDA, etc.). Or you can wait for the counter at the bottom to reach zero and click on "Send to FlashGet".

YouSendit Hack with Zango

The regular YouSendIt service stores files for 7 days or 25 downloads. But if you install the Zango Search Assistant tool, YouSendIt service stores files for 14 days or 25,000 downloads, whichever comes first. Zango Search Assistant shows relevant ads by 180solutions.

Rapidshare Hacks

How to disable the rapidshare.de download counter in IE or Firefox

While waiting for a download, type javascript:c(countdown = 0); in the browser address bar to eliminate the rapidshare countdown (waiting) feature. If this trick doesn't work, try the alternative below:

1. Click the Free button to initiate the download from the rapidshare website
2. As the countdown timer begins, type the following URL in the location bar and press Enter or click the Go button. The rapidshare direct download link should appear immediately.


The third option to bypass the waiting time in Firefox is to use the Greasemonkey extension and install the rapidshare.user.js script (rapidshare no wait). RapidLeecher is another PHP script, for the Rapidshare.de, Megaupload and MyTempDir services, that enables immediate and simultaneous multiple downloads using proxy servers.

Bypass the Rapidshare data download limit

Rapidshare limits each user to a certain amount of downloading per day, based on the user's IP address. You can easily cheat rapidshare by showing up with a different IP address.

1. Clear your browser cookies.
2. Open the command prompt (Start - Run - cmd.exe)
3. Run the following simple commands:
ipconfig /flushdns
ipconfig /release
ipconfig /renew
4. Type exit to close the DOS window. Restart the rapidshare download job.

This trick may not work if your ISP has assigned you a static IP address. (BSNL assigns a dynamic IP)

If Rapidshare blocks your IP, change your proxy

Rapidshare might block an IP from downloading for some time. To bypass this restriction, change your IP. First obtain an IP address and its port from publicproxyservers.com, then enter that IP and port in your browser's connection settings window and click OK.

Download files from Rapidshare like a Premium user

Install the FlashGot extension to integrate Firefox with the FlashGet download manager. Go to a rapidshare download link; once the timer is done, right click on it and choose FlashGot Link from the context menu. FlashGet will launch the download window. Choose only 1 split file for the download to work. This works on RAPIDSHARE even without a premium account.

Rapidshare Free Premium Account

There are two ways to get a free rapidshare premium account. The first is to keep visiting the rapidshare website several times a day and watch for a crown icon that points to rapidshare.de/cgi-bin/freeaccount.cgi. Once you enter your email and the rapidshare word CAPTCHA, a rapidshare account is sent to your email address from RapidShare 1-click hosting.

You have successfully joined the RapidShare-Community! You can now start downloading files with PREMIUM-privileges. Please make sure you have cookies enabled.

Even after your free time is over, you can always extend your membership FOR FREE! Just upload files in your premium-zone and spread the links. You will get premium-points when your files are downloaded. You can then use those points to extend your account.

You can also continue uploading files and collecting points when your download-time is over. We will delete accounts 45 days after expiration, so enough time to collect points and extend for free!

You may use download-accelerators as well! The easiest way is to activate direct-downloads in your premium-zone, and then just add RapidShare-links to the queue of your favourite download-manager. Please do not forget to add your login-data in your download-manager as well.

It's impractical to visit the same page again and again in anticipation of a free account, though. The other good way to get a free account is to download rapcheck, a small rapidshare tool that runs in the background and checks for free accounts at rapid intervals.

Megaupload Hacks

Search with Google for files on rapidshare or Megaupload servers
The uploaded filename is contained in the Rapidshare and Megaupload URLs. This helps us find what is posted on rapidshare/megaupload for download.

To search for ebooks and documents in PDF format on Rapidshare:
pdf "rapidshare.de/files" site:rapidshare.de
To download movies and video files:
+inurl:avi|mpg|wmv site:rapidshare.de
To download mp3 files from rapidshare:
+inurl:wma|mp3 site:rapidshare.de
To download software, zipped files, programs from rapidshare:
+inurl:exe|rar|zip site:rapidshare.de

Replace rapidshare.de with megaupload.com to search for files available on MegaUpload.com servers.

Yousendit Hacks

Disable the Yousendit 25 downloads limit

After a file is uploaded to yousendit, it displays a link to your file once the upload process is complete. The link generally looks like the one below:

http://s8.yousend...83E2C899DB721

Prefix the above yousendit link with http://anonym.to/? so that the new URL looks like:

http://anonym.to/?http://s8.yousend...83E2C899DB721

Load the URL in your browser and the download should begin. This trick will not bypass the seven-day limit.

Unlimited downloads from Yousendit

The YouSendIt link looks like the one below:
http://s12.yousendit.com/d.aspx?id=46FK4...82KLDFS78D

See the letter "d" after "yousendit.com/" ? Change that letter "d" to an "e" and the link will never die. The d.aspx has the counter on. if it finds there has been more than 25 hits then it refers the user to expired.aspx otherwise it refers you to e.aspx. e.aspx has the download link and does not add any more hits. the link may stay for longer but will not eliminate the 7 day period.

YouSendit Download Manager

You can use most download managers when downloading files from YouSendIt, for example the YSIGet Download Manager. If the download stops before completion and you need to resume it, click on the original YouSendIt download link to re-open the browser window.

Rapidshare RapidLeech Script

The Rapidleech script transfers files from rapidshare and megaupload via your server's fast connection and dumps the file on your server.
http://rapidleech.com/

Rapidshare Megaupload Download Manager

After you choose the download location of the files you're about to download, you just have to insert the link of the file hosted on rapidshare.de's servers and press Enter. The program automatically starts to download the file; you just wait about 15 to 45 seconds for the download to start.



See the screenshot above for the contents of the All In One Rapidshare Hacks software. Also download the Rapidshare Instant Downloader and Rapidshare Leecher v3.0 PHP scripts.

RapidShare introduces Word CAPTCHA

You have requested the file Adobe-Acrobat.pdf (3820923 Bytes). This file has been downloaded 440 times already. IMPORTANT: Download-accelerators are only supported with a PREMIUM-Account! Only free-users: Please enter a three letter CAPTCHA here.

Download Rapidshare & Megaupload Files Searching Plugin for searching RapidShare and MegaUpload file servers from your Firefox or IE browsers.

Rapidshare introduces a free RapidShare folders service - Create your own interactive directories with any RapidShare-files you want. The biggest advantage is that your users can instantly see if all files are available for download or not. You may create as many folders as you want. Even sub-folders are possible. You can also protect your folders and sub-folders with a password. Of course you can edit your folders at anytime.

Free Rapidshare Premium Account

You can log into the RapidShare PREMIUM-Zone without using your credit card or PayPal: during certain periods, Rapidshare provides free premium accounts to normal users. These free premium links are valid for 24 hours. The URL to try is rapidshare.de/cgi-bin/freeaccount.cgi. It may be that all the free premium accounts are taken, so try again later.

More Rapidshare hacks - Search for files on megaupload and rapidshare

Search Video on Megaupload: avi|mpg|mpeg|wmv|rmvb site:megaupload.com
Search Audio Music files on MegaUpload: mp3|ogg|wma site:megaupload.com
Search Software Warez ISO Programs on MegaUpload: zip|rar|exe site:megaupload.com
Search ebooks and magazine on megaupload: pdf|rar|zip|doc|lit site:megaupload.com

Search Video on Rapidshare: avi|mpg|mpeg|wmv|rmvb site:rapidshare.de
Search MP3 Music files on Rapidshare: mp3|ogg|wma site:rapidshare.de
Search full software programs on Rapidshare: zip|rar|exe site:rapidshare.de
Search PDF files, documents, ebooks, magazines on Rapidshare: pdf|doc|lit|rar|zip site:rapidshare.de

For megaupload and rapidshare searching, narrow your search down to what you want by putting your search terms in the first part of the query.

Say you want to search for the Da Vinci Code ebook; use the query
da vinci code pdf|doc|lit|rar|zip site:rapidshare.de

Say you want to search for Madonna mp3 files and music videos; use the query
madonna mp3|wmv site:rapidshare.de

Say you want to search for Sania Mirza images or Aishwarya Rai photos on Google Images; use the query
sania mirza filetype:jpg|gif aishwarya filetype:jpg|gif
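All of these queries follow one pattern: search terms first, then the pipe-separated extensions, then the site restriction. Here is a tiny TypeScript helper (my own sketch) that assembles one:

// Assemble a "terms ext1|ext2 site:host" query like the examples above.
function fileSearch(terms: string, exts: string[], site: string): string {
  return `${terms} ${exts.join("|")} site:${site}`;
}
console.log(fileSearch("da vinci code", ["pdf", "doc", "lit", "rar", "zip"], "rapidshare.de"));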

Saturday, November 04, 2006

Compress Your 1 GB Data/Movie into Just 10 MB
Yes, you can do it, but it takes some time. That's OK, given that you can compress your data into such a small space. Here's how you do it: you need a piece of software called KGB Archiver. It's an archiver just like WinRAR/WinZip, but believe me, this one is way cooler. Here's the link to download it:
rapidshare.com/files/1154164/KGB_Archiever_win_gui_v1.2.0.23.exe.html
Download it and install it as you would any normal software. After installation, run it; I'll explain how to use it, it's damn simple. You can compress your data in two formats: 1) KGB format, 2) ZIP format. Everyone knows the ZIP format, so I'll tell you which is better and why. If you use the KGB format, it takes 5-6 hours on 256 MB of RAM to compress 1 GB of data/movie into less than 10 MB, but (and this is why I don't prefer it) decompression takes just as long, so it's fairly useless, and you need KGB Archiver itself just to decompress that format. If you use the ZIP format instead, it takes 6-7 hours on 256 MB of RAM, but I tell you it's worth it, because WinRAR and WinZip can both decompress it in seconds. OK, now it's clear which format to use and which is better. Now we'll go through how to use the software.
Run the application. It will ask which file you want to compress; select the file. Once you have selected it, you get two options for the format to compress it in; select the ZIP format (I prefer it), and then set the compression level to maximum (the higher the compression level, the smaller the compressed file will be). Just ignore the time estimate shown below it, because it's useless…
There is an option to auto-shutdown your computer after the compression is over, so you can go to school/college/office and leave it compressing. I do the same.
** I know some people already know this archiver and many don't, so I posted it. Please don't leave useless comments. THANK YOU, AND PLEASE VISIT AGAIN!!!