random minor PHP questions

bitt3n

Senior member
Dec 27, 2004
202
0
76
I have a few random basic PHP-related questions:

1) I am trying to install the Net_Geo PEAR module on my ISP's server, but I cannot figure out how to tell PHP where I put the module files. I put them in a folder called mylibs at the top level of my directory and added the line

ini_set( 'include_path', ini_get( 'include_path' ).PATH_SEPARATOR."/mylibs" );

to my header code, before including the files with:

require_once( "cache/lite.php" );
require_once( "net/geo.php" );

and I get the error

Fatal error: main(): Failed opening required 'cache/lite.php' (include_path='.:/usr/lib/php:/usr/local/lib/php:/mylibs')

How do I change the ini_set line to tell PHP where to find the modules?


2) Has anyone used the Net_Geo module for anything, and if so, is it fairly accurate? It seems pretty cool to be able to identify the lat/long coordinates of a user's ISP from their IP. I noticed on the PEAR site that the module is listed as very out-of-date. I found another module on the site called Net_GeoIP, but apparently you have to pay for the database for that one. On a related note, I am curious how close the lat/long of the user's IP is likely to be to the user's own lat/long. Is it usually accurate to within 5 miles? 10? 50?


3) Unrelated question: I want to check that the JPEG a user uploads is exactly 100x100 pixels before moving the upload from the temporary directory to the uploads directory. I know I can get the pixel dimensions using getimagesize(), but I cannot figure out how to refer to the temporary directory properly. I cut and pasted the temp directory path from the phpinfo() output into the function to get the following:

$image_size = getimagesize("C:\WINDOWS\TEMP/$_FILES['upload']");

but that does not work. Can someone tell me what I am doing wrong? (Uploads work fine. I am running PHP locally.)


4) Unrelated question: What is a reasonable maximum number of MySQL table rows to search, given a standard budget shared-hosting ISP and, say, 100 users searching at a given time? For example, if I do a search on these forums, I imagine the search must look at several million rows of data. If I pay some web company $10 a month and an average of 100 users are on my site at any one time running a database search over 4 million rows of text strings, can I reasonably expect it to work? This is speaking very generally, obviously. My knowledge of MySQL is very limited and I am only trying to get my bearings.
 

sunase

Senior member
Nov 28, 2002
551
0
0
3) I always just do this:
$imageInfo = getimagesize($_FILES['upfile']['tmp_name']);
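For your 100x100 JPEG check specifically, something along these lines should work (an untested sketch; I made up the uploads path, and I'm using your field name 'upload' rather than my 'upfile'):

if (is_uploaded_file($_FILES['upload']['tmp_name'])) {
    $info = getimagesize($_FILES['upload']['tmp_name']);
    // $info[0] = width, $info[1] = height, $info[2] = image type constant
    if ($info !== false && $info[0] == 100 && $info[1] == 100 && $info[2] == IMAGETYPE_JPEG) {
        move_uploaded_file($_FILES['upload']['tmp_name'], 'uploads/' . basename($_FILES['upload']['name']));
    } else {
        echo 'The image must be a 100x100 JPEG.';
    }
}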

1) In this context, the leading slash means you are referring to the root of the file system (not the root of the website, as it would in a URL).

Just for a working example, here's what I have in my php.ini (I keep one alongside my script, and PHP reads it in addition to the server's main one):
include_path = ".:/usr/lib/php:/usr/local/lib/php:/home/sunase/pear/lib" ;

And here's how I load PEAR stuff in scripts:
require_once 'PHP/Compat.php';
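If you can't drop in your own php.ini, your ini_set() approach should still work once you give it an absolute path instead of "/mylibs". Something like this, assuming mylibs sits directly under your document root (also note the PEAR files are normally capitalized as Cache/Lite.php and Net/Geo.php, which matters on Linux):

ini_set('include_path', ini_get('include_path') . PATH_SEPARATOR . $_SERVER['DOCUMENT_ROOT'] . '/mylibs');
require_once 'Cache/Lite.php';
require_once 'Net/Geo.php';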

Edit: note my file input has a different name, but that's easy to adjust for.
 

bitt3n

Senior member
Dec 27, 2004
202
0
76
Thanks, (3) solved my problem, and I am about to try (1).

Meanwhile I have two more basic PHP questions that I have not yet been able to find answers to elsewhere on forums:

1) POST data expiration question:

When I click the back button to return to a page that accepted POST data (such as a page on which the user uploads a picture), Firefox gives me the message "The page you are trying to view contains POST data that has expired from cache. If you resend the data, any action the form carried out... etc.", and IE says the page has expired. Obviously, when I hit the back button I want to go to the last displayed version of the page (after the data was submitted, e.g. "thank you for uploading your picture") and not resubmit the data. Is there some way to ensure that the POST data does not expire so that I can do this? So far the answers I have found suggest only workarounds like the redirect trick sketched below. Isn't there an easy way to do it?
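For example, the redirect-after-POST workaround looks roughly like this (file names made up):

// upload.php -- handle the POST, then redirect, so the browser's
// history holds a plain GET request instead of the POST
if ($_SERVER['REQUEST_METHOD'] == 'POST') {
    // ... validate and save the uploaded picture here ...
    header('Location: thanks.php');
    exit;
}

Hitting back then returns to thanks.php as a normal GET request instead of resubmitting, but it means the "thank you" page has to live at its own URL.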

2) MYSQL database question

Users of my system will periodically update its database with their current progress on a task by submitting a form. They can also search the most recent entry each other user has provided to the system. Rarely (1% of time), they will want to search past entries as well as present entries. My options are:

a) Create one table storing the most recent entry for each user, and a second table storing all entries, past and present. Then I can search the first table most of the time and the second table when necessary. This should save a lot of time, because the first table will be much smaller than the second, and usually its data will be all the search requires. However, this means storing the most recent data twice in the database, which seems like bad design (?)

b) Create one table storing everything, even though this will make searches less efficient, because so many rows will be irrelevant to almost every search.

c) Create one table storing the most recent entries and another storing older entries, and search across both tables. However, I read on a forum that "as far as i know, you can't implement fulltext searching across tables because it requires a fulltext index, and an index cannot be defined on more than one table." Since the progress updates will only be a maximum of about 300 characters, I could potentially make them varchar to get around this problem. However, I have no experience searching across tables and I don't know if this is a bad idea. (The kind of fulltext search I mean is sketched below.)
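From what I have read, a fulltext search on a single table (option b) would look something like this (table and column names are hypothetical, and I believe a FULLTEXT index requires a MyISAM table):

// assumes the index exists: ALTER TABLE entries ADD FULLTEXT (progress);
$sql = "SELECT user_id, progress, entry_time
        FROM entries
        WHERE MATCH (progress) AGAINST ('" . mysql_real_escape_string($terms) . "')";
$result = mysql_query($sql);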

Thanks for your help!
 

stndn

Golden Member
Mar 10, 2001
1,886
0
0
For the database question (2), I'd go with option b: create one table storing everything.
There are a few reasons for that:

1.
Have you thought about how you would move the recent entries to the past entries? Moving them from recent to past, while it is only a one-step process, is troublesome and might end up creating discrepancies between the two tables.

2.
You can always put an index on the timestamp if you're worried about search performance (see the sketch after this list).

3.
Searching across two tables is more time-consuming than searching one table.
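For point 2, I mean something like this (table and column names are just examples; I've combined the timestamp with the user id, since your lookups are per-user):

// index so "latest entry for each user" lookups don't have to scan the whole table
mysql_query("ALTER TABLE entries ADD INDEX idx_user_time (user_id, entry_time)");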


I understand that you said the search rarely happens. But why design solutions around what rarely happens when you can create a general solution that covers both the 99% case and the 1% case?


For the first question regarding include path:
If the ini_set() failed, can you just use the full path in your require_once()?

So, instead of:
require_once ("cache/lite.php");
require_once ("net/geo.php");

You'd have:
require_once ("{$_SERVER['DOCUMENT_ROOT']}/mylibs/cache/lite.php");
require_once ("{$_SERVER['DOCUMENT_ROOT']}/mylibs/net/geo.php");
 

bitt3n

Senior member
Dec 27, 2004
202
0
76
Originally posted by: stndn
For the database question (2), I'd go with option b: create one table storing everything.
There are a few reasons for that:

1.
Have you thought about how you would move the recent entries to the past entries? Moving them from recent to past, while it is only a one-step process, is troublesome and might end up creating discrepancies between the two tables.

2.
You can always put an index on the timestamp if you're worried about search performance.

3.
Searching across two tables is more time-consuming than searching one table.


I understand that you said the search rarely happens. But why design solutions around what rarely happens when you can create a general solution that covers both the 99% case and the 1% case?
Thanks for the advice. At the moment I have it designed according to version (a): whenever a user posts a new entry, it goes into both the latest_entries table and the all_entries table. No data needs to be transferred between the two tables, because I always overwrite that user's entry in latest_entries with the new one and, at the same time, insert the new entry into all_entries (without replacing the earlier entries, so I have a historical list).
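Roughly, the two writes look like this (simplified, and assuming user_id is a unique key in latest_entries):

// overwrite the user's single row in latest_entries ...
mysql_query("REPLACE INTO latest_entries (user_id, progress, entry_time)
             VALUES ($user_id, '$progress', NOW())");
// ... and append the same entry to the historical table
mysql_query("INSERT INTO all_entries (user_id, progress, entry_time)
             VALUES ($user_id, '$progress', NOW())");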

As for indexing the timestamp, perhaps you can help me understand this better. For this to work, I would index the timestamp and then run a query of the form "select the most recent entry for each user from the entries table". However, this still requires MySQL to comb through almost the entire table for each search, until it reaches the oldest 'last entry' in the list. (It seems I could accomplish the same thing by adding a boolean column to the table marking whether each row is the latest entry for its user.) The single-table solution with an indexed timestamp seems much more intensive than keeping a separate table of latest entries: with such a table, 99% of the time I would just be listing every row in it, versus combing through a table that could be 100x as large to find only the latest entries.
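In other words, with one table I believe the query has to be something along these lines (names hypothetical), which is what worries me:

// most recent entry per user, pulled out of the one big table
$sql = "SELECT a.user_id, a.progress, a.entry_time
        FROM all_entries a
        JOIN (SELECT user_id, MAX(entry_time) AS latest
              FROM all_entries
              GROUP BY user_id) m
          ON a.user_id = m.user_id AND a.entry_time = m.latest";

whereas with a separate latest_entries table it is just SELECT * FROM latest_entries.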

However I am also uncomfortable with the dual-table format because it strikes me as being kind of a hack.

Sorry if I misunderstand your suggestion, and thanks again for your help.