Should I Omit the PHP Closing Tag (?>)?

Friday, November 19, 2010
After years of working with PHP, I never knew that the closing tag was optional. I used to consider it standard to close every PHP block, just like the conditionals (if-else) and the loops (for, do-while, etc.) . . . you name it. Closure is enforced; failing to close a construct results in a parse error.

A year ago, I joined a project with a senior developer, and he gave us a copy of the coding standard he wanted us to follow. The guideline was pretty impressive, and I realized there were a lot of things I could improve in my own coding. Seeing that guideline convinced me that this man was really good. Then I started browsing his code and noticed that he wasn't using the PHP closing tag in the controllers and libraries we were using. We were using CodeIgniter. Puzzled by this, I asked our boss why he wasn't using the closing tag at all. That was when I learned that the closing tag is optional when a file contains only PHP code and, more importantly, that omitting it prevents problems with trailing whitespace.

PHP developers frequently run into errors from sending headers and cookies after output has already been sent to the browser, and that output can be unintentional. If whitespace sneaks in after the end of a PHP code block, it becomes output the moment that script is included.

The Zend Framework coding standard says: "For files that contain only PHP code, the closing tag ('?>') is never permitted. It is not required by PHP, and omitting it prevents the accidental injection of trailing white space into the response." Keep in mind that this idea/standard only applies to PHP files that contain nothing but PHP code - no HTML.
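To see the failure mode concretely, here is a minimal sketch (the filenames and values are made up for illustration): a pure-PHP include that ends with a closing tag followed by a stray blank line leaks that whitespace as output, so a later header() call fails.

```php
<?php
// config.php (problematic): a blank line follows the closing tag below.
// That whitespace becomes output the moment this file is included.
$site_name = 'My Site';
?>

<?php
// index.php: including config.php emits the stray whitespace first,
// so this header() call triggers "Warning: Cannot modify header
// information - headers already sent".
require 'config.php';
header('Location: /home.php');
```

Deleting the closing tag from config.php (and from every file that contains only PHP) makes this problem impossible, which is exactly what the Zend Framework rule is protecting against.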



China Helped 'Hijack' U.S. Internet Traffic, Report Says

Thursday, November 18, 2010
China has done little to improve access to and the security of the Internet in the last year, in some cases lending its support to Web-based attacks on foreign computer systems and tightening its control on the Internet, according to a Wednesday report.

Methods in China for infiltrating American computer systems as well as those of foreign governments have become increasingly sophisticated in the last year, but the average citizen in the country has fewer options when it comes to freedom on the Web, according to the U.S.-China Economic and Security Review Commission (USCC).

The USCC is charged with monitoring the national security implications of doing business with China. Wednesday's report is the 12-member commission's eighth report on the topic since 2000. It covers a variety of topics, including Internet freedoms, and this year's report includes 45 recommendations for Congress to consider.

"The Chinese government continues to maintain a sophisticated Internet filtering system to restrict freedom of speech," the report found. "Beyond filtering, the Chinese government has increasingly sought to direct public discussion over the Internet."

"Moreover, the penetration of Google's computer network this year has renewed concerns about the Chinese government's tolerance or possible sponsorship of malicious computer activity," the USCC continued.

The commission found "some level of state support" for Chinese individuals and organizations that attacked American computer systems and those of foreign governments, vice chairman Carolyn Bartholomew said in a statement.

"In addition, for a brief period in April, Chinese Internet service providers 'hijacked,' or inappropriately gained access to, U.S. Internet traffic," Bartholomew said. "This incident affected numerous government sites, including those for the Senate and the Office of the Secretary of Defense."

In a statement provided to Reuters, China Telecom denied hijacking U.S. Internet traffic.

Earlier this year, Google discovered a sophisticated attack that originated in China, which was designed to steal Google intellectual property and access the Gmail accounts of Chinese human rights activists. As a result, Google said it would stop censoring its search results in China, and possibly pull out of the Chinese market altogether depending on the reaction of Chinese officials.

In a white paper released earlier this week, Google said that Internet censorship acts as an impediment to free trade, a position the USCC said was persuasively argued.

"In order to successfully reduce restrictions on and disruption of the Internet, governments must focus on three critical areas as they negotiate trade agreements: advancing the unrestricted flow of information; promoting new, stronger transparency rules; and ensuring that Internet services can be provided without a local investment," Google wrote.

"An open Internet has been and remains an absolutely critical component of the new information economy's ability to empower individuals and create shared information markets," Google continued. "Closed systems are antithetical to the Internet's success and will significantly disable its potential to support trade and innovation going forward."

Unfortunately, China does not appear to be moving away from a closed system, the USCC found.

"Our report notes that over the past year China has continued to tighten its control on the Internet, dashing hopes for the Internet to act as a means to liberalize Chinese society," Bartholomew said. "Authorities skillfully balance the need to limit speech and information on the Internet against the Chinese public's desire to participate in discourse about the country's social conditions."

The USCC said China often delegates censorship responsibilities to private companies, like Baidu. The fact that Baidu is "heavily funded by American investors [has] implications for the United States," Bartholomew said.

Google said that governments should honor World Trade Organization obligations "and develop new international rules that provide enhanced protection against these trade barriers of the 21st century."

The USCC said this approach "is particularly important in light of the proliferation of state-based Internet censorship, now employed by some 40 countries—a tenfold increase over the past decade."

Taken From: PCMag



Google's Inevitable Ruin Begins


Google image search and page preview mode are omens of the beginning of the end, trust me.

You can see the beginnings of Google's ruin already, which was inevitable. Nobody can ever leave things alone even though they are working just fine. Someone always has to try to fix things when they're not broken. Once the process begins, it never ends, and it's begun at Google.

The company has made minor tweaks now and then, no big deal. But the recent and more aggressive changes have been terrible. The biggest so far is the change in the way the company handles image search. I think it was drawn into the current stupidity because Bing and other search sites make the images page fancy with a little Ajax or JavaScript to jazz them up, as if users care. What users actually care about is speed, and none of these enhanced pages do much for that, especially the Google model.

In the past, when you searched for an image on Google, you got a single page of thumbnails that pretty much loaded instantly like any other page. When you saw the image you wanted, you clicked on it and got it as a stand-alone image, along with the option to view the page where the image existed. This was fast, simple, and effective. More importantly, the image thumbnail on the original page had link information that you could look at; in other words, the location where the image could be found. This was handy for creative searching. It also gave you actual image size information. No more.

Now when you do an image search, Google loads up one massive page with apparently every image it could find, most of which are not remotely what you were interested in. Instead of the single page of thumbnails, you get over 20 pages of large thumbnails all on the same huge page. And that's all you get. There is no information, just endless thumbnails. No sizes, no locations to scan, nothing! And it takes forever to load all these large thumbnails. If you make a mistake in your query, you'll have to wait and wait until the process is over. And because thousands of useless large thumbnails are loaded, a huge waste of user bandwidth takes place each time.

Now you have to place the pointer on top of each image for an Ajax-like pop-out to happen, which gives you the size and location information. So instead of being able to scan the entire page with a simple glance to find an image from, say, usgs.gov, you have to put the cursor on each and every image and wait for a pop-out with the information.

And if you click on the image to go to the site where the image is found, it delivers a weird Ajax-like result with the image superimposed over the containing website. It's not only weird, but it has crashed the browser more than a few times. Did I say weird? I meant, it sucks!

How is this an improvement? It's adding a stupid feature, trying to show off. The entire image search is now pretty much useless. It takes at least twice as long to use, and is needlessly fancy. It does not make the experience easier, it makes it harder. It wastes time and bandwidth. Welcome to the new Google.

And I see nothing on the search settings page that allows for any changes. If I have my results settings set to ten results per page, why do I get thousands of pics? That doesn't seem like ten to me. This improvement is an abomination, the beginning of the end.

And if you think I'm wrong about this being a new trend at Google, then all you have to do is read the news from yesterday. Google has decided to add a page preview mode. This is another useless feature that someone thinks improves the experience when it does nothing but show off the fact that someone can code JavaScript or some other display manipulation language.

The idea here is that you run your pointer over the search results and a mini page pops up, showing you what the page looks like. A preview, if you will. I first saw this as an add-on and special feature a decade ago. It didn't catch on then, so why would it now?

It was cute, but basically stupid. The idea, I suppose, was to remind you what the page looks like if you are trying to re-find something you lost track of. Use your history menu for that! This is another bandwidth hog of an idea that, to be honest, turns out to be an annoying nuisance.

Does Google think it needs to add these dreadful features because Bing is breathing down its neck? It must, since the changes to the image page are right out of Bing. Curiously, the old MSN Live Search did it better than Bing. Meanwhile Yahoo, which licenses the Bing engine, has maintained more practical image results and is now superior to both Google and Bing. Go figure!

It doesn't take a genius to see that Google is beginning to make huge judgment errors. And I can assure you, this image fiasco isn't a mere or minor diversion. It's a sign of bad things to come. An omen.

Taken From: PCMag.com



".htaccess" Basics

What is .htaccess?

Wikipedia says: In several web servers (most commonly Apache), .htaccess (hypertext access) is the default name of a directory-level configuration file that allows for decentralized management of web server configuration. The .htaccess file is placed inside the web tree, and is able to override a subset of the server's global configuration; the extent of this subset is defined by the web server administrator. The original purpose of .htaccess was to allow per-directory access control (e.g. requiring a password to access the content), hence the name. Nowadays .htaccess can override many other configuration settings, mostly related to content control, e.g. content type and character set, CGI handlers, etc.

Where or when to use it?

Authorization, authentication
.htaccess files are often used to specify the security restrictions for the particular directory, hence the filename "access". The .htaccess file is often accompanied by a .htpasswd file which stores valid usernames and their passwords.
Rewriting URLs
Servers often use .htaccess to rewrite long, overly comprehensive URLs to shorter and more memorable ones.
Blocking
Use allow/deny to block users by IP address or domain. Also, use to block bad bots, rippers and referrers.
SSI
Enable server-side includes.
Directory listing
Control how the server will react when no specific web page is specified.
Customized error responses
Changing the page that is shown when a server-side error occurs, for example HTTP 404 Not Found.
MIME types
Instruct the server how to treat different file types.
Cache Control
.htaccess files allow a server to control caching by web browsers and proxies to reduce bandwidth usage, server load, and perceived lag.
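To make the list above concrete, here is a small sample .htaccess touching several of these use cases. This is an illustrative sketch, not copied from any particular site: the .htpasswd path, IP address, and file extension are placeholders, and the caching lines assume the server has mod_expires enabled.

```apache
# --- Authorization: password-protect this directory (needs a .htpasswd file) ---
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /home/user/.htpasswd
Require valid-user

# --- Blocking: deny a single IP address (Apache 2.2 syntax) ---
Order Allow,Deny
Allow from all
Deny from 192.0.2.10

# --- Customized error responses ---
ErrorDocument 404 /errors/not-found.html

# --- MIME types: serve .xyz files as plain text ---
AddType text/plain .xyz

# --- Cache control: cache images for a month (requires mod_expires) ---
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
```

Any directive your host has not permitted via AllowOverride will cause a 500 error when placed in .htaccess, so it is worth adding these sections one at a time and testing after each.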

Since .htaccess is a hidden system file please make sure your FTP client is configured to show hidden files. This is usually an option in the program's preferences/options.

How do I start?
1. Create an empty text file using a text editor such as Notepad, and save it as htaccess.txt.
NOTE: The reason to save the file as htaccess.txt first is that many operating systems and FTP applications cannot read or display .htaccess files by default. Once the file is uploaded to the server, you can rename it to .htaccess.
2. Edit the contents of the file. Check the following examples:
  • Point an entire site to a different URL, such as domain.net redirected to domain.com:

    # This allows you to redirect your entire website to any other domain
    Redirect 301 / http://rv8820.blogspot.com/

  • Redirect index.html to a specific subfolder:

    # This allows you to redirect index.html to a specific subfolder
    Redirect /index.html http://rv8820.blogspot.com/newdirectory/

  • Redirect an old file to a new file path:

    # Redirect old file path to new file path
    Redirect /olddirectory/oldfile.html http://rv8820.blogspot.com/newdirectory/newfile.html

  • Redirect to a specific index page:

    # Provide Specific Index Page (Set the default handler)
    DirectoryIndex index.html

  • Redirect users to access the site with the www prefix:

    # To redirect all users to access the site WITH the www. prefix,
    # (http://example.com/... will be redirected to http://www.example.com/...)
    # adapt and uncomment the following:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.rv8820\.blogspot\.com$ [NC]
    RewriteRule ^(.*)$ http://www.rv8820.blogspot.com/$1 [L,R=301]

  • Redirect viewers of your domain to use the secure version of your domain:

    # An easy way to always redirect the user to a secure connection (https://) is with the following lines:
    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://www.rv8820.blogspot.com/$1 [R,L]

  • Redirect users to www for both http:// and https://:

    RewriteEngine On

    RewriteCond %{SERVER_PORT} 80
    RewriteCond %{HTTP_HOST} !^www\.rv8820\.blogspot\.com$ [NC]
    RewriteRule ^(.*)$ http://www.rv8820.blogspot.com/$1 [L,R=301]

    RewriteCond %{SERVER_PORT} 443
    RewriteCond %{HTTP_HOST} !^www\.rv8820\.blogspot\.com$ [NC]
    RewriteRule ^(.*)$ https://www.rv8820.blogspot.com/$1 [L,R=301]

  • Redirect users to use https:// for a particular folder:

    # In case you wish to force HTTPS for a particular folder you can use the following:
    # (matching the folder in the RewriteRule pattern itself avoids redirecting
    # to /somefolder/somefolder/... when the captured path already contains it)
    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^somefolder/(.*)$ https://www.rv8820.blogspot.com/somefolder/$1 [R,L]


3. Now upload the file to the root folder of your site and be sure to rename it to .htaccess, with no filename extension.
4. Make sure that your redirect functions properly.
  • Paths to where you should save this file can be found in this article: System Paths
  • The definitive guide on Apache directives that can be used in .htaccess files can be found here: http://httpd.apache.org/docs/mod/core.html
