Q: Video capture on Linux? We need to capture live video and display easily on Linux. We need a cheap card or USB device with a simple API. Anyone want to share some experience?
A: Use the video4linux library. I've used it from a C++ program and was able to capture webcam frames within about an hour. (Very easy to use and set up.)
A: If you need to program, you're best off using GStreamer, a multimedia framework under Linux.
Cheese, mentioned by jackbravo, is based on GStreamer, as is Flumotion, a streaming server I work on.
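For a quick smoke test of a V4L2 webcam from the shell, a pipeline along these lines should show live video (untested here; the exact element names depend on your GStreamer version and installed plugins):
gst-launch v4l2src ! ffmpegcolorspace ! xvimagesink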
A: As mentioned, use dvgrab to capture video from the camera over its FireWire interface, then use tools such as ffmpeg (command line) or kino (simple GUI video editor) to process the video as needed. PCI-based FireWire cards are relatively inexpensive and easy to find.
Here are some examples:
*
*continuous capture from firewire, autosplit every couple of minutes
dvgrab --size 500 --autosplit <filename>
*watch the camera live
dvgrab - | mplayer -
Be aware that some recent distros (e.g. Fedora 8) ship new but half-baked FireWire drivers. Ubuntu, however, works great.
A: There are "sealed" camera solutions out there with mini-webservers and an ethernet port on the back. Just plug it in to the network, set its IP, and open up a browser... in linux or wherever
If you want to capture in linux, I once had a cheap webcam capturing single frames in a perl script, which could have been modified for real time - though that was about 10 years ago. Anyway, its possible :-/
A: There's the cheese gnome application. Really simple to use. Not too many features, just video capture.
A: OpenCV will allow you to capture individual frames from a camera and save them to disk. If you then need to manipulate these to create a video, I would suggest netpbm, a pretty powerful set of command-line tools you can use with some shell scripting to make a video or do whatever it is you need.
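To illustrate the capture loop, here is a minimal sketch using OpenCV's C++ API (it assumes a reasonably recent OpenCV and a V4L-compatible camera on device 0; the file naming is just for illustration):
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    cv::VideoCapture cap(0);          // device 0 = first camera
    if (!cap.isOpened()) return 1;    // no camera found
    cv::Mat frame;
    for (int i = 0; i < 100; ++i) {   // grab 100 frames
        cap >> frame;                 // capture one frame
        if (frame.empty()) break;
        cv::imwrite("frame_" + std::to_string(i) + ".png", frame);
    }
    return 0;
}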
A: Another option is to use FireWire (IEEE 1394) cameras, such as most common DV camcorders. They tend to work really well and give much better video than cheap webcams, and there is a plethora of tools in Linux for working with DV video, such as dvgrab.
A: If you use Java, v4l4j makes it very simple to capture frames from any V4L device. It also allows you to control the device from Java. I used it with a PTZ webcam (Logitech QuickCam Orbit), and I could control the usual things like brightness, saturation and auto-white balance, but also the tilt and pan of the camera. Very handy!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to update a field with random data? I've got a new varchar(10) field in a database with 1000+ records. I'd like to update the table so I can have random data in the field. I'm looking for a SQL solution.
I know I can use a cursor, but that seems inelegant.
MS-SQL 2000, BTW
A: update MyTable Set RandomFld = CONVERT(varchar(10), NEWID())
A: You might be able to adapt something like this to load a test dataset of values, depending on what you are looking for
A: Additionally, if you are just doing this for testing or one time use I would say that an elegant solution is not really necessary.
A: Why not use the first 10 characters of an md5 checksum of the current timestamp and a random number?
A: Something like (untested code):
UPDATE yourtable
SET yourfield= CHAR(32+ROUND(RAND()*95,0));
Obviously, concatenate more random characters if you want up to ten chars.
It's possible that the query optimizer might set all fields to the same value, since RAND() is typically evaluated once per statement; in that case, I would try
SET yourfield = LEFT(yourfield, 0) + CHAR…
to trick the optimizer into recalculating the expression for each row.
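For completeness, an untested variant that sidesteps the single-evaluation problem: NEWID() is generated per row, so CHECKSUM(NEWID()) gives a per-row pseudo-random integer to map into printable ASCII (both functions exist in SQL Server 2000):
UPDATE yourtable
SET yourfield = CHAR(32 + ABS(CHECKSUM(NEWID())) % 95)
              + CHAR(32 + ABS(CHECKSUM(NEWID())) % 95)
              + CHAR(32 + ABS(CHECKSUM(NEWID())) % 95)
-- ...repeat up to ten CHAR(...) terms to fill the varchar(10)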
A: If this is a one time thing just to get data into the system, I really see no issue with using a cursor; as much as I hate cursors, they do have their place.
A: How about this:
DECLARE @myid uniqueidentifier
SET @myid = NEWID() -- note: set once, so every row gets the same 10 characters
UPDATE TBL SET Field = LEFT(CONVERT(varchar(255), @myid), 10)
A: If you are in SQL Server you can use
CAST(RAND() as varchar(10))
EDIT: This will only work inside an iteration. As part of a multi-row insert it will use the same RAND() result for each row.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: log4j log file names? We have several jobs that run concurrently and have to use the same config info for log4j. They are all dumping the logs into one file using the same appender. Is there a way to have each job dynamically name its log file so they stay separate?
Thanks
Tom
A: If the job names are known ahead of time, you could include the job name when you do the getLogger() call. You then can bind different appenders to different loggers, with separate file names (or other destinations).
If you cannot know the job name ahead of time, you could configure the logger at runtime instead of using a configuration file:
FileAppender appender = new FileAppender();
appender.setFile(...); // log4j 1.x uses setFile, e.g. "logs/" + jobName + ".log"
appender.setLayout(...);
appender.activateOptions(); // applies the settings and opens the file
Logger logger = Logger.getLogger("com.company.job." + jobName);
logger.addAppender(appender);
A: We have something similar implemented in our system. We store the job-specific loggers in a Hashtable and initialize appenders for each of them as needed.
Here's an example:
public class JobLogger {
private static Hashtable<String, Logger> m_loggers = new Hashtable<String, Logger>();
private static String m_filename = "..."; // Root log directory
public static synchronized void logMessage(String jobName, String message)
{
Logger l = getJobLogger(jobName);
l.info(message);
}
public static synchronized void logException(String jobName, Exception e)
{
Logger l = getJobLogger(jobName);
l.info(e.getMessage(), e);
}
private static synchronized Logger getJobLogger(String jobName)
{
Logger logger = m_loggers.get(jobName);
if (logger == null) {
Layout layout = new PatternLayout("...");
logger = Logger.getLogger(jobName);
m_loggers.put(jobName, logger);
logger.setLevel(Level.INFO);
try {
File file = new File(m_filename);
file.mkdirs();
file = new File(m_filename + jobName + ".log");
FileAppender appender = new FileAppender(layout, file.getAbsolutePath(), false);
logger.removeAllAppenders();
logger.addAppender(appender);
}
catch (Exception e)
{ ... }
}
return logger;
}
}
Then to use this in your job you just have to use a one line entry like this:
JobLogger.logMessage(jobName, logMessage);
This will create one log file for each job name and drop it in its own file with that job name in whichever directory you specify.
You can fiddle with other types of appenders and such, as written it will continue appending until the JVM is restarted which may not work if you run the same job on a server that is always up, but this gives the general idea of how it can work.
A: Can you pass a Java system property for each job? If so, you can parameterize like this:
java -Dmy_var=somevalue my.job.Classname
And then in your log4j.properties:
log4j.appender.A.File=${my_var}/A.log
You could populate the Java system property with a value from the host's environment (for example) that would uniquely identify the instance of the job.
A: You can have each job set NDC or MDC and then write an appender that varies the name based on the NDC or MDC value. Creating a new appender isn't too hard. There may also be a appender that will fit the bill in the log4j sandbox. Start looking in http://svn.apache.org/viewvc/logging/log4j/trunk/contribs/
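As an illustrative (untested) sketch of that idea with log4j 1.x - the class name and MDC key are made up, and a production version would cache one writer per job instead of reopening the file:
import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

public class JobFileAppender extends FileAppender {
    @Override
    protected void subAppend(LoggingEvent event) {
        // Each job calls MDC.put("jobName", ...) when it starts.
        Object job = event.getMDC("jobName");
        String target = "logs/" + (job == null ? "default" : job) + ".log";
        if (!target.equals(getFile())) {
            setFile(target);       // point the appender at the job's file
            activateOptions();     // reopen the underlying writer
        }
        super.subAppend(event);
    }
}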
A: You could write your own appender that makes up its own filename, perhaps using the [File.createTempFile](http://java.sun.com/j2se/1.5.0/docs/api/java/io/File.html#createTempFile(java.lang.String,%20java.lang.String)) method. If the FileAppender class was written correctly, you should be able to extend it—or RollingFileAppender—and override the getFile method to return one that you choose based on whatever new properties you would like to add.
A: Building on shadit's answer. If each job can be identified by which class's main method was started, you can use the system property sun.java.command, which contains the full name of the class that was started. For instance like this:
log4j.appender.LOGFILE.File=${sun.java.command}.log
I use it together with a TimestampFileAppender like this:
log4j.appender.LOGFILE=TimestampFileAppender
log4j.appender.LOGFILE.TimestampPattern=yyyy_MM_dd__HH_mm
log4j.appender.LOGFILE.File=${sun.java.command}_{timestamp}.log
This way when I'm developing in Eclipse I get a new log file for each new process that I run, identified by the classname of the class with the main method and the time it was started.
A: You could programmatically configure log4j when you initialize the job.
You can also set the log4j.properties file at runtime via a system property. From the manual:
Set the resource string variable to the value of the log4j.configuration system property. The preferred way to specify the default initialization file is through the log4j.configuration system property. In case the system property log4j.configuration is not defined, then set the string variable resource to its default value "log4j.properties".
Assuming you're running the jobs from different java commands, this will enable them to use different log4j.properties files and different filenames for each one.
Without specific knowledge of how your jobs are run it's difficult to say!
A: Tom, you could specify an appender for each job. Let's say that you have 2 jobs corresponding to two different Java packages, com.tom.firstbatch and com.tom.secondbatch; you would have something like this in log4j.xml:
<category name="com.tom.firstbatch">
<appender-ref ref="FIRST_APPENDER"/>
</category>
<category name="com.tom.secondtbatch">
<appender-ref ref="SECOND_APPENDER"/>
</category>
A: You may implement the following:
*
*A ThreadLocal holder for the identity of your job.
*Extend FileAppender, your FileAppender has to keep a Map holding a QuietWriter for every job identity. In method subAppend, you get the identity of your job from the ThreadLocal, you look up (or create) the QuietWriter and write to it...
I may send you some code by mail if you wish...
A: log4j.logger.com.foo.admin=,AdminFileAppender
log4j.logger.com.foo.report=,ReportFileAppender
This is another way to do this task... here com.foo.admin is the full package name.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Listview Multiple Selection
Is there any way to force a listview control to treat all clicks as though they were done through the Control key?
I need to replicate the functionality of using the control key (selecting an item sets and unsets its selection status) in order to allow the user to easily select multiple items at the same time.
Thank you in advance.
A: It's not the standard behaviour of the ListView control, even when MultiSelect is set to true.
If you wanted to create your own custom control you would need to do the following:
*
*Derive a control from ListView
*add a handler to the "Selected" event.
*In the "OnSelected", maintain your own list of selected items.
*If the newly selected item is not in your list, add it. If it is, remove it.
*In code, select all of the items in your list.
Should be simple enough to implement and feel like multi-select without using the control key!
A: Here is a complete solution that is a modification of the solution provided by Matthew M. above.
It offers an improvement as well as a bit of added functionality.
Improvement:
*
*left clicking the control gives focus to the control.
*right mouse click behaviour is consistent (single selection)
Added functionality:
*
*the control has a property (MultiSelectionLimit) that allows you to put a limit on how many items can be selected at once.
After my first posting I realised a minor problem with the code. Clearing multiple selections would lead to the ItemSelectionChanged event being invoked multiple times.
I could find no way to avoid this with the current inheritance, so instead I adopted a solution where the bool property SelectionsBeingCleared will be true until all selected items have been deselected.
This way a simple call to that property will make it possible to avoid updating effects until all of the multiple selections have been cleared.
public class ListViewMultiSelect : ListView
{
public const int WM_LBUTTONDOWN = 0x0201;
public const int WM_RBUTTONDOWN = 0x0204;
private bool _selectionsBeingCleared;
/// <summary>
/// Returns a boolean indicating if multiple items are being deselected.
/// </summary>
/// <remarks> This value can be used to avoid updating through events before all deselections have been carried out.</remarks>
public bool SelectionsBeingCleared
{
get
{
return this._selectionsBeingCleared;
}
private set
{
this._selectionsBeingCleared = value;
}
}
private int _multiSelectionLimit;
/// <summary>
/// The limit to how many items that can be selected simultaneously. Set value to zero for unlimited selections.
/// </summary>
public int MultiSelectionLimit
{
get
{
return this._multiSelectionLimit;
}
set
{
this._multiSelectionLimit = Math.Max(value, 0);
}
}
public ListViewMultiSelect()
{
this.ItemSelectionChanged += this.multiSelectionListView_ItemSelectionChanged;
}
public ListViewMultiSelect(int selectionsLimit)
: this()
{
this.MultiSelectionLimit = selectionsLimit;
}
private void multiSelectionListView_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
{
if (e.IsSelected)
{
if (this.MultiSelectionLimit > 0 && this.SelectedItems.Count > this.MultiSelectionLimit)
{
this._selectionsBeingCleared = true;
List<ListViewItem> itemsToDeselect = this.SelectedItems.Cast<ListViewItem>().Except(new ListViewItem[] { e.Item }).ToList();
foreach (ListViewItem item in itemsToDeselect.Skip(1)) {
item.Selected = false;
}
this._selectionsBeingCleared = false;
itemsToDeselect[0].Selected = false;
}
}
}
protected override void WndProc(ref Message m)
{
switch (m.Msg)
{
case WM_LBUTTONDOWN:
if (this.SelectedItems.Count == 0 || !this.MultiSelect) { break; }
if (this.MultiSelectionLimit > 0 && this.SelectedItems.Count > this.MultiSelectionLimit) { this.ClearSelections(); }
int x = (m.LParam.ToInt32() & 0xffff);
int y = (m.LParam.ToInt32() >> 16) & 0xffff;
ListViewHitTestInfo hitTest = this.HitTest(x, y);
if (hitTest != null && hitTest.Item != null) { hitTest.Item.Selected = !hitTest.Item.Selected; }
this.Focus();
return;
case WM_RBUTTONDOWN:
if (this.SelectedItems.Count > 0) { this.ClearSelections(); }
break;
}
base.WndProc(ref m);
}
private void ClearSelections()
{
this._selectionsBeingCleared = true;
SelectedListViewItemCollection itemsToDeselect = this.SelectedItems;
foreach (ListViewItem item in itemsToDeselect.Cast<ListViewItem>().Skip(1)) {
item.Selected = false;
}
this._selectionsBeingCleared = false;
this.SelectedItems.Clear();
}
}
A: You might want to also consider using Checkboxes on the list view. It's an obvious way to communicate the multi-select concept to your average user who may not know about Ctrl+Click.
From the MSDN page:
The CheckBoxes property offers a way to select multiple items in the ListView control without using the CTRL key. Depending on your application, using check boxes to select items rather than the standard multiple selection method may be easier for the user. Even if the MultiSelect property of the ListView control is set to false, you can still display checkboxes and provide multiple selection capabilities to the user. This feature can be useful if you do not want multiple items to be selected yet still want to allow the user to choose multiple items from the list to perform an operation within your application.
A: Here is the complete solution that I used to solve this problem using WndProc. Basically, it does a hit test when the mouse is clicked; then, if MultiSelect is on, it will automatically toggle the item on/off [.Selected] and not worry about maintaining any other lists or messing with the ListView functionality.
I haven't tested this in all scenarios, ... it worked for me. YMMV.
public class MultiSelectNoCTRLKeyListView : ListView {
public MultiSelectNoCTRLKeyListView() {
}
public const int WM_LBUTTONDOWN = 0x0201;
protected override void WndProc(ref Message m) {
switch (m.Msg) {
case WM_LBUTTONDOWN:
if (!this.MultiSelect)
break;
int x = (m.LParam.ToInt32() & 0xffff);
int y = (m.LParam.ToInt32() >> 16) & 0xffff;
var hitTest = this.HitTest(x, y);
if (hitTest != null && hitTest.Item != null)
hitTest.Item.Selected = !hitTest.Item.Selected;
return;
}
base.WndProc(ref m);
}
}
A: Drill down through ListViewItemCollection and you can set the Selected property for individual items to true. This will, I believe, emulate the "multi-select" feature that you are trying to reproduce. (Also, as the above commenter mentioned, be sure to have the MultiSelect property of the ListView set to true.)
A: Just in case anyone else has searched for and found this article: the accepted solution is no longer valid (in fact I am not sure it ever was). In order to do what you want (select multiple without a modifier key), simply set the list view selection type to be multiple, rather than extended. Multiple selects one item after another when clicked; extended requires the modifier key to be pressed first.
A: The Ctrl+Click behavior is as implemented by the browser, and has little to do with the actual .NET control. The result you're trying to achieve could be acquired with a lot of additional JavaScript - the easiest way would probably be to build a JavaScript control from scratch that works this way, rather than trying to hack up the listview. Would this be desirable? In that case I could look into it and get back to you with a solution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: PHP $_GET issue
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo "$_GET[$field]";
echo "<br>";
}
print_r($datarray);
This is the output I am getting. I see the data is there in datarray but when
I echo $_GET[$field]
I only get "Array"
But print_r($datarray) prints all the data. Any idea how I pull those values?
OUTPUT
Array (
[0] => Array (
[0] => Grade1
[1] => ln
[2] => North America
[3] => yuiyyu
[4] => iuy
[5] => uiyui
[6] => yui
[7] => uiy
[8] => 0:0:5
)
)
A: Use var_export($_GET) to more easily see what kind of array you are getting.
From the output of your script I can see that you have multiple nested arrays. It seems to be something like:
$_GET = array( array( array("Grade1", "ln", "North America", "yuiyyu", "iuy", "uiyui", "yui","uiy","0:0:5")))
so to get those variables out you need something like:
echo $_GET[0][0][0]; // => "Grade1"
A: EDIT: When I completed your test, here was the final URL:
http://hofstrateach.org/Roberto/process.php?keys=Grade1&keys=Nathan&keys=North%20America&keys=5&keys=3&keys=no&keys=foo&keys=blat&keys=0%3A0%3A24
This is probably a malformed URL. When you pass duplicate keys in a query, PHP makes them an array. The above URL should probably be something like:
http://hofstrateach.org/Roberto/process.php?grade=Grade1&schoolname=Nathan&region=North%20America&answer[]=5&answer[]=3&answer[]=no&answer[]=foo&answer[]=blat&time=0%3A0%3A24
This will create individual entries for most of the fields, and make $_GET['answer'] be an array of the answers provided by the user.
Bottom line: fix your Flash file.
A: Use <pre> tags before print_r, then you will have a tree printed (or just look at the page source). From this point you will have a clear understanding of how your array is structured and will be able to pull the value you want.
I suggest further reading on the $_GET variable and arrays, for a better understanding of their values.
A: Calling echo on an array will always output "Array".
print_r (from the PHP manual) prints human-readable information about a variable.
A: Try this:
foreach ($_GET as $field => $label)
{
$datarray[]=$_GET[$field];
echo $_GET[$field]; // you don't really need quotes
echo "With quotes: {$_GET[$field]}"; // but if you want to use them
echo $field; // this is really the same thing as echo $_GET[$field], so
if($label == $_GET[$field]) {
echo "Should always be true<br>";
}
echo "<br>";
}
print_r($datarray);
A: It's printing just "Array" because when you say
echo "$_GET[$field]";
PHP can't know that you mean $_GET element $field, it sees it as you wanting to print variable $_GET. So, it tries to print it, and of course it's an Array, so that's what you get. Generally, when you want to echo an array element, you'd do it like this:
echo "The foo element of get is: {$_GET['foo']}";
The curly brackets tell PHP that the whole thing is a variable that needs to be interpreted; otherwise it will assume the variable name is $_GET by itself.
In your case though you don't need that, what you need is:
foreach ($_GET as $field => $label)
{
$datarray[] = $label;
}
and if you want to print it, just do
echo $label; // or $_GET[$field], but that's kind of pointless.
The problem was not with your Flash file; change it back to how it was. You know it was correct because your $datarray variable contained all the data. Why do you want to extract data from $_GET into another array anyway?
A: Perhaps the GET variables are arrays themselves? i.e. http://site.com?var[]=1&var[]=2
A: It looks like your GET argument is itself an array. It would be helpful to have the input as well as the output.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Do I have a gcc optimization bug or a C code problem? Test the following code:
#include <stdio.h>
#include <stdlib.h>
main()
{
const char *yytext="0";
const float f=(float)atof(yytext);
size_t t = *((size_t*)&f);
printf("t should be 0 but is %d\n", t);
}
Compile it with:
gcc -O3 test.c
The GOOD output should be:
"t should be 0 but is 0"
But with my gcc 4.1.3, I have:
"t should be 0 but is -1209357172"
A: In the C99 standard, this is covered by the following rule in 6.5-7:
An object shall have its stored value accessed only by an lvalue expression that has one of
the following types:73)
*
*a type compatible with the effective type of the object,
*a qualified version of a type compatible with the effective type of the object,
*a type that is the signed or unsigned type corresponding to the effective type of the
object,
*a type that is the signed or unsigned type corresponding to a qualified version of the
effective type of the object,
*an aggregate or union type that includes one of the aforementioned types among its
members (including, recursively, a member of a subaggregate or contained union), or
*a character type.
The last item is why casting first to a (char*) works.
A: This is no longer allowed according to C99 rules on pointer aliasing: the compiler may assume that pointers of two different types do not point to the same location in memory. The exceptions to this rule are void and char pointers.
So in your code where you are casting to a pointer of size_t, the compiler can choose to ignore this. If you want to get the float value as a size_t, just assign it and the float will be cast (truncated not rounded) as such:
size_t size = (size_t)(f); // this works
This is commonly reported as a bug, but in fact really is a feature that allows optimizers to work more efficiently.
In gcc you can disable this assumption with a compiler switch: -fno-strict-aliasing.
A: It is bad C code :-)
The problematic part is that you access one object of type float by casting it to an integer pointer and dereferencing it.
This breaks the aliasing rule. The compiler is free to assume that pointers to different types such as float or int don't overlap in memory. You've done exactly that.
What the compiler sees is that you calculate something, store it in the float f and never access it again. Most likely the compiler has removed part of the code and the assignment never happened.
The dereferencing via your size_t pointer will in this case return some uninitialized garbage from the stack.
You can do two things to work-around this:
*
*use a union with a float and a size_t member and do the casting via type punning. Not nice but works.
*use memcpy to copy the contents of f into your size_t. The compiler is smart enough to detect and optimize this case (see the sketch below).
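A minimal sketch of both workarounds, assuming a 32-bit float and using a fixed-width integer so the sizes match exactly:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float f = 0.0f;

    /* 1) type punning through a union */
    union { float f; uint32_t u; } pun;
    pun.f = f;
    printf("via union:  %u\n", (unsigned)pun.u);

    /* 2) memcpy - compilers optimize this into a plain load */
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    printf("via memcpy: %u\n", (unsigned)u);

    return 0;
}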
A: Why would you think that t should be 0?
Or, more accuractely phrased, "Why would you think that the binary representation of a floating point zero would be the same as the binary representation of an integer zero?"
A: Use the compiler flag -fno-strict-aliasing.
With strict aliasing enabled, as it is by default for at least -O3, in the line:
size_t t = *((size_t*)&f);
the compiler assumes that the size_t* does NOT point to the same memory area as the float*. As far as I know, this is standards-compliant behaviour (gcc's adherence to the strict aliasing rules started around gcc 4, as Thomas Kammeyer pointed out).
If I recall correctly, you can use an intermediate cast to char* to get around this. (compiler assumes char* can alias anything)
In other words, try this (can't test it myself right now but I think it will work):
size_t t = *((size_t*)(char*)&f);
A: This is bad C code. Your cast breaks C aliasing rules, and the optimiser is free to do things that break this code. You will probably find that GCC has scheduled the size_t read before the floating-point write (to hide fp pipeline latency).
You can set the -fno-strict-aliasing switch, or use a union or a reinterpret_cast to reinterpret the value in a standards-compliant way.
A: Aside from the pointer aliasing issue, you're expecting that sizeof(size_t) == sizeof(float). I don't think it is (on 64-bit Linux size_t should be 64 bits but float 32 bits), meaning your code will read something uninitialized.
A: I tested your code with:
"i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)"
and there was no problem.
Output:
t should be 0 but is 0
So there isn't a bug in your code. That doesn't mean that it is good code.
But I would add the return type of the main function and a "return 0;" at the end of the function.
A: -O3 is not deemed "sane", -O2 is generally the upper threshold except maybe for some multimedia apps.
Some apps can't even go that far, and die if you go beyond -O1 .
If you have a new enough GCC ( I'm on 4.3 here ), it may support this command
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
If you're careful, you'll possibly be able to go through that list and find the given singular optimization you're enabling which causes this bug.
From man gcc :
The output is sensitive to the effects of previous command line options, so for example it is possible to find out which
optimizations are enabled at -O2 by using:
-O2 --help=optimizers
Alternatively you can discover which binary optimizations are enabled by -O3 by using:
gcc -c -Q -O3 --help=optimizers > /tmp/O3-opts
gcc -c -Q -O2 --help=optimizers > /tmp/O2-opts
diff /tmp/O2-opts /tmp/O3-opts | grep enabled
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How do I do a manual uninstall of Oracle? Sometimes my Oracle database on Windows gets hosed. How do I do a manual uninstall of Oracle?
A: The seven-step process to remove all things Oracle from a Windows machine:
A. Delete the Oracle services:
In the registry, go to
\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
and delete everything that starts with "Oracle"
B. Remove other Oracle stuff from the registry:
Go to \HKEY_LOCAL_MACHINE\SOFTWARE\ and delete the key ORACLE
C. Reboot
D. Delete all the Oracle software from the directories where you installed it
E. Delete the Oracle software inventory:
Delete the directory C:\Program Files\Oracle. You must do this no matter where you installed your Oracle software - the Oracle installer automatically writes information here.
F. Delete all shortcuts from your Start menu.
G. Remove the Oracle directories from PATH Environment Variable.
To simplify cleanup in the future, I'd strongly recommend you install your Oracle products in one or more virtual machines.
A: Have a look at:
http://www.oracle-base.com/articles/misc/ManualOracleUninstall.php
Basically, it comes down to:
Remove all you can with the installer.
Remove Oracle keys from the registry.
Remove the Oracle directories from your computer.
With (of course) the requisite reboots thrown in as required ;-)
A: It's worth noting that there is an official Oracle standalone deinstaller: https://docs.oracle.com/cd/E11882_01/install.112/e47689/remove_oracle_sw.htm#LADBI1332, which I just used to uninstall Oracle 11 client. This is not necessarily better or easier to use than the top suggestion on this page, but it is "official".
One thing to note - if you use the official deinstaller, it does not like the temp folder to have spaces in it. So if you have it set to "Documents and Settings...\temp" it will fail. Use the Control Panel environment settings to set the TEMP folder first.
A: Uninstall Oracle 10g from window 7, Xp
step 1 : Open up the start menu and in program files look for oracle – oraDb10g_home folder, and select oracle installation products – > Universal Installer.
step 2 : Select Deinstall Product, which will pop up new window , select check box oracleDb10g_home1 as shown below. Click on remove button. This will remove oracle.
step 3 : Remove the registration file from Regedit, in order to remove oracle 10g completely. Run Regedit.
Delete the following keys if it exits after the un-installation.
HKEY_CURRENT_USER\SOFTWARE\ORACLE HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\Oracle.oracle
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OracleDBConsole
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Oracle10g_home
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OracleService
Step 4: Now delete the folders where you installed the software. By default, they are C:\oracle and C:\Program Files\Oracle.
By doing these steps successfully, Oracle 10g is removed completely. If you are having any problem removing or uninstalling the program (Oracle), then comment below and we will look into it.
A: The tip about using a VM environment is the best: no worries about deinstalling. Just install a complete Oracle environment and, after one successful run, WinRAR the VM... after corrupting the Oracle home once again, just delete the current VM and unrar the backup.
A: This seems way too simple, but in Windows I was able to uninstall Oracle by going into Settings > Apps and Features, finding the Oracle database, clicking it, and then choosing Uninstall. I didn't even need a password.
A: Assuming a Unix-type OS and that you properly installed it using an account named oracle...
find / -user oracle -exec rm -fr {} \;
That having been said, this must be done as root and you had better not mind loss of any and all files that belong to oracle. There will be no... NO recovery from this method.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Log4net with SyslogAppender, 1kb message limit
Has anyone found a way to get around this? Or a better technique to conglomerate logging from multiple web servers reliably?
Any ideas on good log4net log file analysis tools too (plain text not XML) - apart from good 'ol grep of course :)
A: I read about logFaces on another question, or you could use a socket appender and write your own server. logFaces describes itself as a "Log server, aggregator & viewer", but I have yet to try it.
A: The database-based appenders are great for collecting logs from multiple servers.
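For example, a trimmed log4net AdoNetAppender configuration along these lines (the connection string, table and column names are illustrative; see the log4net config examples for the full parameter list):
<appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
  <bufferSize value="100" />
  <connectionType value="System.Data.SqlClient.SqlConnection, System.Data" />
  <connectionString value="Server=loghost;Database=Logs;Integrated Security=True" />
  <commandText value="INSERT INTO Log ([Date],[Message]) VALUES (@log_date, @message)" />
  <parameter>
    <parameterName value="@log_date" />
    <dbType value="DateTime" />
    <layout type="log4net.Layout.RawTimeStampLayout" />
  </parameter>
  <parameter>
    <parameterName value="@message" />
    <dbType value="String" />
    <size value="4000" />
    <layout type="log4net.Layout.PatternLayout" value="%message" />
  </parameter>
</appender>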
A: The 1024-byte limit is part of the syslog RFC (RFC 3164, section 4.1), as is UDP transport, which doesn't have guaranteed delivery (in case you worry about log lines lost in the ether). I think syslog-ng can solve both of these issues, but I'm not a syslog expert.
A: The limitation is imposed by syslog itself, not the appender.
I do not know about log4net, but NLog works perfectly OK with a "shared" file target - i.e. multiple processes can write to one and the same file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I provide dynamic CSS styles or custom theme for web site? There are plenty of ways to provide a dynamic style/theme for a web site, but I am looking for some help on some best practices or techniques that have worked well for others.
I am creating a web site that needs to provide the ability for customers to create or specify their own colors, style, theme, or layout. I'm not convinced how much flexibility I need yet, but basically I need to provide Branding capabilities.
I will be using ASP.NET, and am open to any ideas that will fit within the ASP.NET framework.
A: Using Themes in ASP.NET 2.0 and greater will provide everything you need for this.
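A minimal sketch of picking the theme per customer at runtime (assumes ASP.NET WebForms; the "CustomerTheme" session key is illustrative - the theme must be assigned in Page_PreInit, before the control tree is built):
protected void Page_PreInit(object sender, EventArgs e)
{
    // Pick the customer's theme, falling back to a default one.
    string theme = Session["CustomerTheme"] as string;
    Page.Theme = String.IsNullOrEmpty(theme) ? "Default" : theme;
}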
A: Best way to handle it would be to make a nice CSS document that will specify all the areas that you would like to offer customization, such as header background image, background and text colors, etc. Then build application code to allow specification of which theme to load, and bring up that CSS file.
A: I'd personally go for a CSS-based solution.
You could define the elements' IDs and CSS classes for each page in the web application, so that customers can provide their own set of CSS files.
This approach is platform-agnostic, so that the developer who creates the custom themes is not forced to fit into the ASP.NET themes model - she might as well be a web designer with no programming knowledge.
A: Themes might be a good solution, but having re-read your question I think you might be asking for a method for allowing customers to submit their own branding dynamically, i.e. without you having to modify any files - a hands-off approach. How about having an admin interface consisting of web forms where the customer can upload images and CSS themselves? You could then retrieve that content using an HttpHandler or similar.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why isn't the 'len' function inherited by dictionaries and lists in Python? Example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus, I keep trying the wrong solution since it appears to be the logical one to me.
A: This way fits in better with the rest of the language. The convention in python is that you add __foo__ special methods to objects to make them have certain capabilities (rather than e.g. deriving from a specific base class). For example, an object is
*
*callable if it has a __call__ method
*iterable if it has an __iter__ method,
*supports access with [] if it has __getitem__ and __setitem__.
*...
One of these special methods is __len__ which makes it have a length accessible with len().
A: Guido's explanation is here:
First of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:
(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.
(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.
Saying the same thing in another way, I see ‘len‘ as a built-in operation. I’d hate to lose that. /…/
A: Maybe you're looking for __len__. If that method exists, then len(a) calls it:
>>> class Spam:
... def __len__(self): return 3
...
>>> s = Spam()
>>> len(s)
3
A: Well, there actually is a length method, it is just hidden:
>>> a_list = [1, 2, 3]
>>> a_list.__len__()
3
The len() built-in function appears to be simply a wrapper for a call to the hidden __len__() method of the object.
Not sure why they made the decision to implement things this way though.
A: There is some good info at the link below on why certain things are functions and others are methods. It does indeed cause some inconsistencies in the language.
http://mail.python.org/pipermail/python-dev/2008-January/076612.html
A: The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.
The idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make x + y work for your own class, you write a __add__ method. To make sure that int(spam) properly converts your custom class, write a __int__ method. To make sure that len(foo) does something sensible, write a __len__ method.
This is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling spam.to_i directly instead of saying int(spam).
You're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, len(silly_walks) isn't any more onerous than silly_walks.len(), and Guido has said that he actually prefers it (http://mail.python.org/pipermail/python-3000/2006-November/004643.html).
A: It just isn't.
You can, however, do:
>>> [1,2,3].__len__()
3
Adding a __len__() method to a class is what makes the len() magic work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Is it the filename or the whole URL used as a key in browser caches? It's common to want browsers to cache resources - JavaScript, CSS, images, etc. until there is a new version available, and then ensure that the browser fetches and caches the new version instead.
One solution is to embed a version number in the resource's filename, but will placing the resources to be managed in this way in a directory with a revision number in it do the same thing? Is the whole URL to the file used as a key in the browser's cache, or is it just the filename itself and some meta-data?
If my code changes from fetching /r20/example.js to /r21/example.js, can I be sure that revision 20 of example.js was cached, but now revision 21 has been fetched instead and it is now cached?
A: Browser cache key is a combination of the request method and resource URI. URI consists of scheme, authority, path, query, and fragment.
Relevant excerpt from HTTP 1.1 specification:
The primary cache key consists of the request method and target URI.
However, since HTTP caches in common use today are typically limited
to caching responses to GET, many caches simply decline other methods
and use only the URI as the primary cache key.
Relevant excerpt from URI specification:
The generic URI syntax consists of a hierarchical sequence of
components referred to as the scheme, authority, path, query, and
fragment.
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
hier-part = "//" authority path-abempty
/ path-absolute
/ path-rootless
/ path-empty
A: Yes, any change in any part of the URL (excluding HTTP and HTTPS protocols changes) is interpreted as a different resource by the browser (and any intermediary proxies), and will thus result in a separate entity in the browser-cache.
Update:
The claim in this ThinkVitamin article that Opera and Safari/Webkit browsers don't cache URLs with ?query=strings is false.
Adding a version number parameter to a URL is a perfectly acceptable way to do cache-busting.
What may have confused the author of the ThinkVitamin article is the fact that hitting Enter in the address/location bar in Safari and Opera results in different behavior for URLs with query string in them.
However, (and this is the important part!) Opera and Safari behave just like IE and Firefox when it comes to caching embedded/linked images and stylesheets and scripts in web pages - regardless of whether they have "?" characters in their URLs. (This can be verified with a simple test on a normal Apache server.)
(I would have commented on the currently accepted answer if I had the reputation to do it. :-)
A: I am 99.99999% sure that it is the entire url that is used to cache resources in a browser, so your url scheme should work out fine.
A: The MINIMUM you need to identify an HTTP object is by the full path, including any query-string parameters. Some browsers may not cache objects with a query string but that has nothing to do with the key to the cache.
It is also important to remember that the path is no longer sufficient. The Vary: header in the HTTP response alerts the browser (or proxy server, etc.) to anything OTHER than the URL which should be used to determine the cache key, such as cookies, encoding values, etc.
To your basic question: yes, changing the URL of the .js file is sufficient. To the larger question of what determines the cache key: it's the URL plus the Vary: header restrictions.
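For example, a response with these headers is cached under the URL plus the request's Accept-Encoding value, so compressed and uncompressed copies are stored separately:
HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: max-age=31536000
Vary: Accept-Encoding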
A: It depends. It is supposed to be the full URL, but some browsers (Opera, Safari 2) apply a different cache strategy to URLs with different params.
Your best bet is to change the name of the file.
There is a very clever solution here (uses PHP, Apache)
http://verens.com/archives/2008/04/09/javascript-cache-problem-solved/
Strategy notes:
“According the letter of the HTTP caching specification, user agents should never cache URLs with query strings. While Internet Explorer and Firefox ignore this, Opera and Safari don’t - to make sure all user agents can cache your resources, we need to keep query strings out of their URLs.”
http://www.thinkvitamin.com/features/webapps/serving-javascript-fast
A: Yes. A different path means a different entry from the cache's perspective.
A: Of course it has to use the whole path: '/r20/example.js' vs '/r21/example.js' could be completely different resources to begin with. What you suggest is a viable way to handle version control.
A: Entire url. I've seen a strange behavior in a few older browsers where case sensitivity came into play.
A: In addition to the existing answers, I just want to add that this might not apply if you use ServiceWorkers or e.g. offline-plugin. Then you could experience different cache rules depending on how the ServiceWorkers are set up.
A: In most browsers the full url is used.
In some browsers, if you have a query in the url, the document will never be cached.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How do I change the IP address on Oracle 10g What steps do I need to take to change an IP address for Oracle 10g? I cannot connect to the database after going from a dhcp address to a static IP and a reboot.
A: If the server's IP address changed, these are the first things I would look at:
The TNSNAMES.ORA file on the client -- does it have the IP address hardcoded? If so, change it. Does it use the machine name? If so, does the machine name resolve to the correct IP address on your client machine?
The LISTENER.ORA file on the server -- does it explicitly specify the old IP address as its listening address?
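A typical tnsnames.ora entry looks like this (names and addresses are illustrative) - check whether the HOST value is the old address:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.50)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )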
A: More info please. Do you mean that you have changed the IP address of the host that the database is on and now you have to connect to it from a different machine, or are you having trouble starting the database after the IP change?
... and what error message do you receive?
A: Most obvious files to check are:
$ORACLE_HOME/network/admin/tnsnames.ora
$ORACLE_HOME/network/admin/listener.ora
Other than that we'd need more info...
*
*I presume you mean the Oracle 10g DB and not the Oracle 10g Application Server?
*Does the database start ok?
*Is there anything in the database alert log?
*Are the error(s) connecting from a client or the server?
*What error message(s) do you get?
*Can you ping the machine on its new address (by both name and IP address), from both client and server?
*Does a TNSPING work?
*Can you connect using SQL*Plus on the server?
*What other tool(s) have you tried connecting with?
Update after comment
Please can you post...
*
*Your old ip address (if you know it)
*Your new ip address
*Your FQDN (e.g. machine.domain.com)
*The output of "ipconfig/all" (or equivalent)
*Your listener.ora file
*The output of "$ORACLE_HOME/bin/lsnrctl start"
*The output of "$ORACLE_HOME/bin/lsnrctl status"
A: Check that LOCAL_LISTENER is not defined (or is defined correctly) in the database - the instance may not be registering correctly because of an incorrect entry here. Also try 'ALTER SYSTEM REGISTER' to attempt to register with the listener (rather than waiting up to 3 minutes for auto-registration). Examine the listener.log to see the instance register (service_update *) and 'lsnrctl status' to see if it is there.
A: Did you change the hostname in DNS? Can you ping the hostname from another machine?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/83991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Curl command line for consuming webServices? Do you guys know how I can use the Curl command line to POST SOAP to test a web service?
I have a file (soap.xml) which has all the soap message attached to it I just don't seem to be able to properly post it.
Thanks!
A: curl -H "Content-Type: text/xml; charset=utf-8" \
-H "SOAPAction:" \
-d @soap.txt -X POST http://someurl
A: Posting a string:
curl -d "String to post" "http://www.example.com/target"
Posting the contents of a file:
curl -d @soap.xml "http://www.example.com/target"
A: If you want a fluffier interface than the terminal, http://hurl.it/ is awesome.
A: For a SOAP 1.2 Webservice, I normally use
curl --header "content-type: application/soap+xml" --data @filetopost.xml http://domain/path
A: Wrong.
That doesn't work for me.
For me this one works:
curl \
-H 'SOAPACTION: "urn:samsung.com:service:MainTVAgent2:1#CheckPIN"' \
-X POST \
-H 'Content-type: text/xml' \
-d @/tmp/pinrequest.xml \
192.168.1.5:52235/MainTVServer2/control/MainTVAgent2
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: ReSharper sluggishness
I like ReSharper, but it is a total memory hog. It can quickly swell up and consume a half-gig of RAM without too much effort and bog down the IDE. Does anybody know of any way to configure it to be not as slow?
A: The next release, 4.5, is going to be based around performance and memory footprint.
see Ilya Ryzhenkov's blog
Resharper 4.5 has been released
From my experience it is less of a memory hog, but i still can run out of memory.
A: I had an issue where it was taking upwards of 10 minutes to load a solution of 100+ projects. Once loaded VS performance would be ok, though it would oddly flutter back and forth between ok and really bad.
The short answer: Eliminating Resharper warnings seems to improve overall VS/R# performance.
The biggest problem ultimately was that we had a number of files of binary data (encrypted stuff) being included as embedded resources, which happened to have .xml extensions. Resharper was trying really really hard to analyze those files. Eventually it'd get through but would generate 100K+ errors in the process. Changing the extension to one Resharper did not automatically analyze (.bin in this case) solved the problem.
We still have about 10 files which when they or a file they depend on is edited performance tanks for a while. These files are the partial parts of a single class definition where each file averages 3000 LOC. Yes, that's right, it's about a 30K line class. It also happens to be rather poor code for other reasons, many of which Resharper flags making the right hand gutter bar practically a solid orange line. Editing often causes Resharper to reanalyze the whole thing. While that analysis runs, performance is noticeably affected.
I've come to the conclusion that the fewer errors/warnings there are for R# to identify, the better it performs. My anecdotal evidence gathered while cleaning up/refactoring this project seems to support it.
A lot of folks complain of perf problems with Resharper. If you have even a few big ugly code files with lots of Resharper warnings, then a little time spent cleaning that code up might yield better performance overall. It has for us.
A: Not sure how big your solutions are, but I stopped using 4.5 for the same reasons I stopped using all previous versions, memory usage.
Code analysis and unit test support was the main reason I bought it, turning it off means the rationale for using it is gone.
Workstation has 4GB of memory, and I can easily kill it with ReSharper when running our end-to-end stack in debuggers.
A: You can see how much memory ReSharper uses:
ReSharper -> General -> Show managed memory usage in status bar.
A: If you are working on large source files, Resharper does get sluggish (I'm working on version 5.0 at the time of writing this).
You can view the memory usage of Resharper by clicking on Resharper options -> General -> Show memory use in status bar.
When I first did this, I noticed Resharper had clocked up hundreds of megabytes of memory usage! However, the next step worked for me in (temporarily) fixing the slugishness:
Right click the memory usage, and select "Collect garbage" - this seemed to fix the slugishness for me straight away.
A: Turn off the on-the-fly compilation (which, unfortunately, is one of its best features)
A: Regarding memory hogging - I've found that my VS2008 memory footprint grows every time I close one solution and open another. This is true even if I close a solution and re-open that same solution.
A: The new ReSharper 4.5 works a lot better than the previous 4.x releases. I would recommend you try that one.
A: In previous versions I had the same problem; when 4.0 came out these problems seemed to have gone away. Now with 4.1 I do not feel the huge slowdown I used to have. My IDE does not freeze up anymore.
Have you tried upgrading?
A: Try the 4.5 beta. 4.1 was killing my 2GB dev machine, but it's back to running incredibly smoothly with the beta. Others have had the opposite experience, though, so YMMV.
A: Yes, 4.5 works much better. My understanding is that 4.5 was to address the performance issues.
A: My colleagues and I are also having huge performance issues with ReSharper; just now my ReSharper took 1.1GB of memory. Visual Studio slows down especially when writing JavaScript - it's unbearable. You can turn off the on-the-fly compilation, but it's the best feature it has...
edit: Everybody in this thread seems to have ReSharper 4.x; my version is 6.0.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: What is the quickest way to find the shortest cartesian distance between two polygons?
I have 1 red polygon, say, and 50 randomly placed blue polygons - they are situated in geographical 2D space. What is the quickest/speediest algorithm to find the shortest distance between the red polygon and its nearest blue polygon?
Bear in mind that it is not a simple case of taking the points that make up the vertices of the polygon as values to test for distance as they may not necessarily be the closest points.
So in the end - the answer should give back the closest blue polygon to the singular red one.
This is harder than it sounds!
A: You might be able to reduce the problem, and then do an intensive search on a small set.
Process each polygon first by finding:
*
*Center of polygon
*Maximum radius of polygon (i.e., point on edge/surface/vertex of the polygon furthest from the defined center)
Now you can collect, say, the 5-10 closest polygons to the red one (find the distance center to center, subtract the radius, sort the list and take the top 5) and then do a much more exhaustive routine.
A: For polygon shapes with a reasonable number of boundary points, such as in a GIS or games application, it might be quicker and easier to do a series of tests.
For each vertex in the red polygon, compute the distance to each vertex in the blue polygons and find the closest (hint: compare distance^2 so you don't need the sqrt()).
Find the closest, then check the vertex on each side of the found red and blue vertices to decide which line segments are closest, and then find the closest approach between two line segments.
See http://local.wasp.uwa.edu.au/~pbourke/geometry/lineline3d/ (it's easy to simplify for the 2D case).
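As a concrete building block for the segment test, here is a hedged Python sketch (function names are illustrative; it assumes the segments do not intersect, which holds for disjoint polygons):
def point_segment_dist2(px, py, ax, ay, bx, by):
    # Squared distance from point P to segment AB in 2D.
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0.0:  # degenerate segment: A == B
        return apx * apx + apy * apy
    t = max(0.0, min(1.0, (apx * abx + apy * aby) / ab2))  # clamp projection onto AB
    cx, cy = ax + t * abx, ay + t * aby                    # closest point on AB
    return (px - cx) ** 2 + (py - cy) ** 2

def segment_segment_dist2(a, b, c, d):
    # For non-intersecting segments AB and CD, the minimum distance is
    # attained at an endpoint of one of the segments.
    return min(point_segment_dist2(*c, *a, *b),
               point_segment_dist2(*d, *a, *b),
               point_segment_dist2(*a, *c, *d),
               point_segment_dist2(*b, *c, *d))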
A: Maybe the Frechet Distance is what you're looking for?
Computing the Fréchet distance between two polygonal curves
Computing the Fréchet Distance Between Simple Polygons
A: This screening technique is intended to reduce the number of distance computations you need to perform in the average case, without compromising the accuracy of the result. It works on convex and concave polygons.
Find the minimum distance between each pair of vertices such that one is a red vertex and one is blue. Call it r. The distance between the polygons is at most r. Construct a new region from the red polygon where each line segment is moved outward by r and is joined to its neighbors by an arc of radius r centered at the vertex. Find the distance from each vertex inside this region to every line segment of the opposite color that intersects this region.
Of course you could add an approximate method such as bounding boxes to quickly determine which of the blue polygons can't possibly intersect with the red region.
A: I know you said "the shortest distance" but you really meant the optimal solution or a "good/very good" solution is fine for your problem?
Because if you need to find the optimal solution, you have to calculate the distance between all of your source and destination polygon bounds (not only vertices). If you are in 3D space then each bound is a plane. That can be a big problem (O(n^2)) depending on how many vertices you have.
So if you have a vertex count that makes that square a scary number AND a "good/very good" solution is fine for you, go for a heuristic solution or approximation.
A: You might want to look at Voronoi Culling. Paper and video here:
http://www.cs.unc.edu/~geom/DVD/
A: I would start by bounding all the polygons by a bounding circle and then finding an upper bound of the minimal distance.
Then I would simply check the edges of all blue polygons whose lower bound of distance is lower than the upper bound of the minimal distance against all the edges of the red polygon.
upper bound of min distance = min over all blue polygons { distance(red's center, current blue's center) + current blue's radius }
for every blue polygon where distance(red's center, current blue's center) - current blue's radius < upper bound of min distance:
check the distance of its edges and vertices against the red polygon
But it all depends on your data. If the blue polygons are relatively small compared to the distances between them and the red polygon, then this approach should work nicely, but if they are very close, you won't save anything (many of them will be close enough). And another thing -- If these polygons don't have many vertices (like if most of them were triangles), then it might be almost as fast to just check each red edge against each blue edge.
hope it helps
A: As others have mentioned using bounding areas (boxes, circles) may allow you to discard some polygon-polygon interactions. There are several strategies for this, e.g.
*
*Pick any blue polygon and find the distance from the red one. Now pick any other polygon. If the minimum distance between the bounding areas is greater than the already found distance you can ignore this polygon. Continue for all polygons.
*Find the minimum distance/centroid distance between the red polygon and all the blue polygons. Sort the distances and consider the smallest distance first. Calculate the actual minimum distance and continue through the sorted list until the maximum distance between the polygons is greater than the minimum distance found so far.
Your choice of circles/axially aligned boxes, or oriented boxes can have a great effect on the performance of the algorithm, depending on the actual layout of the input polygons.
For the actual minimum distance calculation you could use Yang et al's 'A new fast algorithm for computing the distance between two disjoint convex polygons based on Voronoi diagram' which is O(log n + log m).
A: Gotta run off to a funeral in a sec, but if you break your polygons down into convex subpolies, there are some optimizations you can do. You can do a binary search on each poly to find the closest vertex, and then I believe the closest point should either be that vertex, or an adjacent edge. This means you should be able to do it in log(log m * n) where m is the average number of vertices on a poly, and n is the number of polies. This is kind of hasty, so it could be wrong. Will give more details later if wanted.
A: I doubt there is a better solution than calculating the distance between the red one and every blue one and sorting these by length.
Regarding sorting, usually QuickSort is hard to beat in performance (an optimized one, that cuts off recursion if size goes below 7 items and switches to something like InsertionSort, maybe ShellSort).
Thus I guess the question is how to quickly calculate the distance between two polygons, after all you need to make this computation 50 times.
The following approach will work for 3D as well, but is probably not the fastest one:
Minimum Polygon Distance in 2D Space
The question is, are you willing to trade accuracy for speed? E.g. you can pack all polygons into bounding boxes, where the sides of the boxes are parallel to the coordinate system axes. 3D games use this approach pretty often. Therefore you need to find the maximum and minimum values for every coordinate (x, y, z) to construct the virtual bounding box. Calculating the distances of these bounding boxes is then a pretty trivial task.
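As a concrete illustration of the axis-aligned case, here is a small C# sketch (Point is again an assumed struct with double X and Y fields):
// Build an axis-aligned bounding box from a polygon's vertices and
// measure the gap between two such boxes (zero if they overlap).
struct Aabb
{
    public double MinX, MinY, MaxX, MaxY;

    public static Aabb FromPoints(Point[] pts)
    {
        var box = new Aabb { MinX = double.MaxValue, MinY = double.MaxValue,
                             MaxX = double.MinValue, MaxY = double.MinValue };
        foreach (Point p in pts)
        {
            box.MinX = Math.Min(box.MinX, p.X);
            box.MaxX = Math.Max(box.MaxX, p.X);
            box.MinY = Math.Min(box.MinY, p.Y);
            box.MaxY = Math.Max(box.MaxY, p.Y);
        }
        return box;
    }

    public static double Distance(Aabb a, Aabb b)
    {
        // Per-axis gap, clamped to zero where the boxes overlap on that axis
        double dx = Math.Max(0.0, Math.Max(a.MinX - b.MaxX, b.MinX - a.MaxX));
        double dy = Math.Max(0.0, Math.Max(a.MinY - b.MaxY, b.MinY - a.MaxY));
        return Math.Sqrt(dx * dx + dy * dy);
    }
}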
Here's an example image of more advanced bounding boxes, that are not parallel to the coordinate system axes:
Oriented Bounding Boxes - OBB
However, this makes the distance calculation less trivial. It is used for collision detection, as you don't need to know the distance for that, you only need to know if one edge of one bounding box lies within another bounding box.
The following image shows an axes aligned bounding box:
Axes Aligned Bounding Box - AABB
OBBs are more accurate, AABBs are faster. Maybe you'd like to read this article:
Advanced Collision Detection Techniques
This is always assuming, that you are willing to trade precision for speed. If precision is more important than speed, you may need a more advanced technique.
A: You could start by comparing the distance between the bounding boxes. Testing the distance between rectangles is easier than testing the distance between polygons, and you can immediately eliminate any polygons that are more than nearest_rect + its_diagonal away (possibly you can refine that even more). Then, you can test the remaining polygons to find the closest polygon.
There are algorithms for finding polygon proximity - I'm sure Wikipedia has a good review of them. If I recall correctly, those that only allow convex polygons are substantially faster.
A: I believe what you are looking for is the A* algorithm, its used in pathfinding.
A: The naive approach is to find the distance between the red and 50 blue objects -- so you're looking at 50 3d Pythagorean calculations + sorting to find the answer. That would only really work for finding the distance between center points though.
If you want arbitrary polygons, maybe your best bet is a raytracing solution that emits rays from the surface of the red polygon with respect to the normal, and reports when another polygon is hit.
A hybrid might work -- we could find the distance from the center points and, assuming we had some notion of the relative size of the blue polygons, cull the result set to the closest among those, then use raytracing to narrow down the truly closest polygon(s).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Is .NET a write once, run anywhere (WORA) platform like Java claims to be? I remember Sun's slogan so vividly... "Write Once, Run Anywhere". The idea being that since programs are compiled into standard byte codes, any device with a Java Virtual Machine could run it. Over the years, Java seems to have made it onto many platforms/devices.
Is this the intention or was it ever the intention of .NET. If so, what kind of efforts are being put forth to make this a reality?
A: To correct some comments by others here, .Net was ALWAYS intended to be multi-platform. That is why Microsoft separated the namespaces into "System.*" (which were platform-neutral) and "Microsoft.*" (which were Windows specific).
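A small hand-rolled illustration of that split (Microsoft.Win32 is one of the Windows-specific namespaces; the registry key shown is just an example):
using System;           // "System.*": platform-neutral base class library
using Microsoft.Win32;  // "Microsoft.*": Windows-specific, e.g. registry access

class NamespaceSplitDemo
{
    static void Main()
    {
        Console.WriteLine("System.* calls run anywhere a CLR exists.");
        // Registry access only makes sense on Windows:
        object product = Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion",
            "ProductName", null);
        Console.WriteLine(product);
    }
}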
A: There is Mono which runs on Linux, Solaris and OS X. In practice .Net is still pretty much a Windows-only platform. It's not really in Microsoft's interests to push it to be WORA, on the contrary. Appearing to be cross-platform however is. A lot of people have been really paranoid about Mono on Linux. MS's supposed strategy is to first let it grow to be an important part of the Linux application platform and then release the lawyers. I wouldn't bet my future on .Net's portability.
A: The answer is a very shaky Yes. The moment you include an external library, the answer changes to No.
For example, Microsoft has no 64-bit JET driver. JET is used by .NET to access MS Access databases.
Any applications compiled for the Any CPU target that use MS Access databases will fail on a 64-bit version of Windows.
(This is ignoring that said applications are not portable to Mono.)
A: Microsoft has never made those claims but they ARE making moves in the WORA arena. Silverlight 2.0 for example will use a subset of the .NET framework and be available on Windows, Linux (through the Moonlight project), MacOS, Windows Mobile, and Nokia handsets.
As others have mentioned, the Mono project has also brought the framework to multiple environments.
A: To put this in context, in many people's view Java never delivered on its "Write Once Run Anywhere" promise either.
At best what you got was "Write Once Debug Everywhere" or "Write Once Looks like crap Everywhere"
The successful CLR based applications have all been written using a graphical framework that is native to the target platform.
For example the following highly successful Linux applications were written using C# bindings to GTK called GTK#, and not using WinForms like you would expect:
Banshee - music player like itunes
fspot - photo manager
TomBoy - notes program
GnomeDo - Quick launcher and dock
Equally, successful Windows .NET applications are not written using GTK# (even though it is cross-platform); they are written using WinForms or WPF.
When Google came to make Chrome they didn't try to use a cross-platform GUI framework; instead they chose to use native GUI frameworks on each platform. Why? Because that way the application fits properly into its environment; it looks, feels and acts like it's native to the operating system it's on.
Basically when you try to have write once run anywhere you have to make serious compromises and what you end up with is something that doesn't really work right anywhere.
The industry has largely given up on the lofty goal of write once run anywhere, as a nice idea which didn't work out in practice.
The best approach with mono/.net is to share your lower level binaries and to use a native gui framework on each target platform. GTK# on linux, winforms or WPF on windows, CocoaSharp on Mac. This way your application will look and feel like a native app.
A: With Mono we're getting pretty close, and with Silverlight we're already there.
A: I don't think the official "intention" of .NET was WORA. I think that you could safely say that .NET was designed so that it would always run on future MS OS's. But there is nothing that precludes .NET from running on other platforms. Mono is an example of an implementation of the .NET runtime for an OS other than Windows.
A: Yes, this was a goal of .NET although I don't think it had the same emphasis as it did in Java. Currently, the only effort that I know of is the Mono project that is creating a version of the CLI which runs on Linux.
Interestingly enough, Silverlight actually has a slimmed down version of the CLR which can run on both Windows and Mac, which allows the same Silverlight app to run on both platforms unchanged.
A: It's theoretically possible, since the CLR (.Net's "virtual machine") complies with an open standard (the CLI). The question is what other implementations there are of that standard. Mono is another work in progress, but it's the only other one I know of.
A: I think that the idea with .NET is that it is a "Write Once, Run Anywhere (that Microsoft chooses)". However, the Mono project is slowly changing the situation.
A: It will never be supported on as many platforms as Java, IMHO.
The only effort is Mono, not sponsored by Microsoft.
Check here on SO and on the official site
A: In theory, yes. .Net assemblies are bytecode, which is converted to native code upon startup, using a JIT ("Just-In-Time") compiler.
In practice, there aren't many platforms beyond Windows which have a .Net JIT compiler. There's one for Linux, called Mono.
Don't know about Mac, Sun etc...
A: Theoretically, the language is designed to be compiled into bytecode like Java which is interpreted by the Common Language Runtime, a mechanism that also allows several languages (not just C#) to work together and run on the .NET framework.
However, Microsoft has only developed the CLR for Windows. There are other non-MS alternatives being developed, the most prominent being Mono, a CLR implementation for a number of platforms (see the link).
So in theory yes, in practice - we'll see.
A: Yes and no. Parts of the .NET environment are standards and could be openly adopted.
For example, the runtime (CLR) has a portable version called Mono which is multi platform, open source and is used by (for example) Second Life.
A: The intention, or at least the pitch, was for this to be the case. The reality is that .NET can't really run on other platforms. The only major exception is Mono, which is an open source project. It's essentially a rewrite of the .NET runtime (the equivalent of the java virtual machine) that works on Linux, Solaris, Mac OS X, Windows, and Unix.
It's been fairly successful, but it's not officially supported.
If you're thinking of getting your monolithic Acme corp employer to adopt .Net and Linux, forget it. Realistically, with .NET, you're on Windows machines, period.
A: Yes, .NET has the Common Language Runtime (CLR) which is the .NET equivalent to the JVM. Microsoft does not support it on as many platforms as Java but with the help of the Mono project it is possible to achive cross platform applications with the usual caveats.
Bear in mind that .NET is more than just the CLR. It is a whole platform.
A: Since .NET is only available (officially) on Windows, then not, it isn't write one, run anywhere. However the Mono team are making a good go at helping spread .NET beyond Windows, but they are always way behind the official stuff.
A: I don't think that it was the original plan, for Microsoft, to create runtimes for every platform and device, but they encouraged this by using a documented (?) intermediate language.
A: Multiplatform was of course in the vision. Right now Mono does a good job of implementing the runtime for other OSes.
Mono
A: Short answer -- no, Microsoft only supports MS operating systems (including Windows Mobile) for .NET.
Long answer -- there are public open-source projects to replicate the .NET framework for linux and other OSs, notably Rotor and Mono. They don't support everything, but you can deploy a lot of .NET code, including silverlight.
A: That depends on your definition of "Anywhere".
There are several flavors of Java virtual machine and of .Net framework.
And most of the time you can't just write code for a desktop vm/framework and expect it to run on a mobile phone one.
So. in a sense, even Java is not really pure "Write Once, Run Anywhere".
It is true, however, that Java's VM is currently running on several operating systems while .Net framework runs only on Windows devices.
There is one interesting initiative called "Mono" which offers .Net support on Linux, Solaris, Mac OS X, Windows, and Unix. Read here: Mono Site
A: .NET can be, because of the CLR, which is similar in function to the JVM.
But I don't believe MS had any intention of it being so.
http://www.mono-project.com/Main_Page
might be useful, but it's not an MS product.
Btw, much like how the wide spectrum of J2EE containers clouds the WORA concept for J2EE apps, ASP.NET apps running on anything besides IIS wouldn't really work the same across disparate platforms.
A: I don't think this was ever really a design goal of .NET - Microsoft has no particular interest in people writing software for non-Windows platforms ....
However, there is The Mono project (http://www.mono-project.com), which is "an open development initiative sponsored by Novell to develop an open source, UNIX version of the .NET development platform."
A: Given responses from others I'm still unclear as to whether it was an actual intention of Microsoft to have .NET be a WORA initiative. The only way to really know I guess is to have somebody from the Microsoft .NET team chime in on this.
Since we cannot definitively know the original WORA intentions of .NET we can point to efforts that are attempting to make this a reality (as previous answers have talked about).
[Mono](http://en.wikipedia.org/wiki/Mono_(software))
This effort is an initiative happening outside of Microsoft.
Mono is a project led by Novell (formerly by Ximian) to create an Ecma standard compliant .NET compatible set of tools, including among others a C# compiler and a Common Language Runtime. Mono can be run on Linux, BSD, UNIX, Mac OS X, Solaris and Windows operating systems.
Silverlight
This effort is being heavily pursued by Microsoft. Silverlight 2.0 implements a version of the framework that is the same as .NET 3.0 and seems to be an attempt to successfully deliver the framework to multiple platforms through the browser.
It is compatible with multiple web browser products used on Microsoft Windows and Mac OS X operating systems. Mobile devices, starting with Windows Mobile 6 and Symbian (Series 60) phones, will also be supported.
While it does not specifically address bringing functionality to GNU/Linux, there is apparently a third-party free software implementation named [Moonlight](http://en.wikipedia.org/wiki/Moonlight_(runtime)).
This seems to be what we currently know, but as stated earlier, it would be very helpful if somebody from the .NET team could pitch in on this one to properly clarify if WORA was in fact an original initiative.
A: It was most assuredly meant to be WORA. It's just MS figured Anywhere and Everywhere would be Windows by now. Who knew Linux and the MacOS would still be around. But judging by all the Macs at the PDC, I guess they were either half right or half wrong!
A: If WORA was really an original goal, then I guess we'd see .NET implementations on all the major platforms by now, fully supported by Microsoft. I seem to recall that at the time Sun was shouting WORA from the rooftops, Microsoft's riposte was "Write Any (language) Run on One (platform)" (WARO:-). As somebody else mentioned, I think they've always been firm backers of WORASLAIW (Write Once Run Anywhere So Long As Its Windows)
As you point out, they seem to be changing tack a bit with Silverlight to try and get a piece of the Flash/Flex action now that the battlefield has shifted significantly away from the desktop and towards the browser.
A: But it IS multiplatform: Win9x/WinNT/Mobile.
A: If Microsoft were serious about .NET on non-Windows platforms, they would have released the class libraries for reuse by others, avoiding the need to rewrite the same libraries again. Sun, on the other hand, has done this, meaning fewer barriers are present if one wishes to port to another platform. Naturally, with Java one still needs to write a VM and do the native stuff, but it helps avoid the headache of reimplementing the entire class library. The standardization of the language was a marketing ploy to grab non-technical folk. A language without libraries is worthless. Try doing your next project with only the primitive types... that's right, write your own string class etc. and tell me how helpful a standardised language is without any libraries available...
A: I think the idea was to create inter-operability between the different programming languages, not WORA.
A: .Net Core makes .Net "almost" Write Once Run Anywhere.
But there are subtle differences -
*
*.Net Core is not really .Net
*With .Net Core, you write once but build multiple times, once for each specific target OS. Whereas in Java binaries are built once and can be run on any supported OS.
dotnet build --runtime ubuntu.18.04-x64
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: SQLBindParameter to prepare for SQLPutData using C++ and SQL Native Client I'm trying to use SQLBindParameter to prepare my driver for input via SQLPutData. The field in the database is a TEXT field. My function is crafted based on MS's example here:
http://msdn.microsoft.com/en-us/library/ms713824(VS.85).aspx.
I've set up the environment, made the connection, and prepared my statement successfully, but when I call SQLBindParameter (using the code below) it consistently fails, reporting: [Microsoft][SQL Native Client]Invalid precision value
int col_num = 1;
SQLINTEGER length = very_long_string.length( );
retcode = SQLBindParameter( StatementHandle,
col_num,
SQL_PARAM_INPUT,
SQL_C_BINARY,
SQL_LONGVARBINARY,
NULL,
NULL,
(SQLPOINTER) col_num,
NULL,
&length );
The above relies on the driver in use returning "N" for the SQL_NEED_LONG_DATA_LEN information type in SQLGetInfo. My driver returns "Y". How do I bind so that I can use SQLPutData?
A: Though it doesn't look just like the documentation's example code, I found the following solution to work for what I'm trying to accomplish. Thanks gbjbaanb for making me retest my input combinations to SQLBindParameter.
SQLINTEGER length;
RETCODE retcode = SQLBindParameter( StatementHandle,
col_num, // position of the parameter in the query
SQL_PARAM_INPUT,
SQL_C_CHAR,
SQL_VARCHAR,
data_length, // size of our data
NULL, // decimal precision: not used for our data types
&my_string, // SQLParamData will return this value later to indicate what data it's looking for so let's pass in the address of our std::string
data_length,
&length ); // it needs a length buffer
// length in the following operation must still exist when SQLExecDirect or SQLExecute is called
// in my code, I used a pointer on the heap for this.
length = SQL_LEN_DATA_AT_EXEC( data_length );
After a statement is executed, you can use SQLParamData to determine what data SQL wants you to send it as follows:
std::string* my_string;
// set string pointer to value given to SQLBindParameter
retcode = SQLParamData( StatementHandle, (SQLPOINTER*) &my_string );
Finally, use SQLPutData to send the contents of your string to SQL:
// send data in chunks until everything is sent
SQLINTEGER len;
for ( int i(0); i < my_string->length( ); i += CHUNK_SIZE )
{
std::string substr = my_string->substr( i, CHUNK_SIZE );
len = substr.length( );
retcode = SQLPutData( StatementHandle, (SQLPOINTER) substr.c_str( ), len );
}
A: You're passing NULL as the buffer length; this is an in/out param that should be the size of the col_num parameter. Also, you should pass a value for the ColumnSize or DecimalDigits parameters.
http://msdn.microsoft.com/en-us/library/ms710963(VS.85).aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Setting the default ssh key location ssh will look for its keys by default in the ~/.ssh folder. I want to force it to always look in another location.
The workaround I'm using is to add the keys from the non-standard location to the agent:
ssh-agent
ssh-add /path/to/where/keys/really/are/id_rsa
(on Linux and MingW32 shell on Windows)
A: If you are only looking to point to a different location for your identity file, then you can modify your ~/.ssh/config file with the following entry:
IdentityFile ~/.foo/identity
man ssh_config to find other config options.
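For example, a minimal ~/.ssh/config using the key location from the question (the host name below is a placeholder):
# Use the non-standard key for every host...
Host *
    IdentityFile /path/to/where/keys/really/are/id_rsa

# ...or only for a specific host
Host example.com
    User myuser
    IdentityFile /path/to/where/keys/really/are/id_rsa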
A: man ssh gives me these options which could be useful.
-i identity_file
Selects a file from which the identity (private key) for RSA or DSA authentication is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_rsa and ~/.ssh/id_dsa for protocol version 2. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files).
So you could create an alias in your bash config with something like
alias ssh="ssh -i /path/to/private_key"
I haven't looked into an ssh configuration file, but like the -i option this too could be aliased
-F configfile
Specifies an alternative per-user configuration file. If a configuration file is given on the command line, the system-wide configuration file (/etc/ssh/ssh_config) will be ignored. The default for the per-user configuration file is ~/.ssh/config.
A: Update for Git Bash on Windows 10: on my system, the Git Bash app (which works over the SSH layer provided by OpenSSH) looks for an environment variable called HOME (press the Windows key and type "env" to edit environment variables). If this variable points to a place that doesn't exist, Git Bash may never open.
Like on Linux, the Git Bash app will look for its config file in %HOME%\.ssh.
e.g. If you set HOME to C:\Users\Yourname, then it will look for C:\Users\Yourname\.ssh
Finally, within the config text file, Git Bash will look for the IdentityFile path.
On Windows, set the path using cygwin notation.
e.g. to /e/var/www/certs/keys/your_passwordless_key.key
Bonus: for free, PHPStorm will use that setup. Restart IDE if you've just changed settings.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
}
|
Q: What is idiomatic code? I'd be interested in some before-and-after c# examples, some non-idiomatic vs idiomatic examples. Non-c# examples would be fine as well if they get the idea across. Thanks.
A: Practically speaking, it means writing code in a consistent way, i.e. all developers who work on your code base should follow the same conventions when writing similar code constructs.
So the idiomatic way is the way that matches the style of the other code, non-idiomatic way means you are writing the kind of function but in a different way.
e.g. if you are looping a certain number of items, you could write the loop in several ways:
for (int i = 0; i < itemCount; i++)
for (int i = 1; i <= itemCount; i++)
for (int i = 0; i < itemCount; ++i)
etc
What is most important is that the chosen style is used consistently. That way people become very familiar and confident with how to use it, and when you spy a usage which looks different it can be a sign of a mistake being introduced, perhaps an off by one error, e.g.
for (int i = 1; i < itemCount; i++)
A: In PHP I sometimes encounter code like:
$trimmed = array();
foreach ($array as $value) {
    $trimmed[] = trim($value);
}
return $trimmed;
Which idiomatically can be implemented with:
return array_map('trim', $array);
A: Some examples:
Resource management, non idiomatic:
string content;
StreamReader sr = null;
try {
sr = File.OpenText(path);
content = sr.ReadToEnd();
}
finally {
if (sr != null) {
sr.Close();
}
}
Idiomatic:
string content;
using (StreamReader sr = File.OpenText(path)) {
content = sr.ReadToEnd();
}
Iteration, non idiomatic:
for (int i=0;i<list.Count; i++) {
DoSomething(list[i]);
}
Also non-idiomatic:
IEnumerator e = list.GetEnumerator();
while (e.MoveNext()) {
    DoSomething(e.Current);
}
Idiomatic:
foreach (Item item in list) {
DoSomething(item);
}
Filtering, non-idiomatic:
List<int> list2 = new List<int>();
foreach (int num in list1) {
if (num>100) list2.Add(num);
}
idiomatic:
var list2 = list1.Where(num=>num>100);
A: Idiomatic code is code that does a common task in the common way for your language. It's similar to a design pattern, but at a much smaller scale. Idioms differ widely by language. One idiom in C# might be to use an iterator to iterate through a collection rather than looping through it. Other languages without iterators might rely on the loop idiom.
A: Idiomatic means following the conventions of the language. You want to find the easiest and most common ways of accomplishing a task rather than porting your knowledge from a different language.
non-idiomatic python using a loop with append:
mylist = [1, 2, 3, 4]
newlist = []
for i in mylist:
newlist.append(i * 2)
idiomatic python using a list comprehension:
mylist = [1, 2, 3, 4]
newlist = [(i * 2) for i in mylist]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
}
|
Q: OO and how it works in PHP I've been writing PHP for about six years now and have got to a point where I feel I should be doing more to write better code. I know that Object Oriented code is the way to go but I can't get my head around the concept.
Can anyone explain in terms that any idiot can understand, OO and how it works in PHP or point me to an idiots guide tutorial?
A: A warning is in order: you won't learn OO programming without learning OO design! The key concept is to define the functions operating on your data together with the appropriate data. Then you can tell your objects what to do, without having to query their contents.
Surely take a look at the "Tell, don't Ask" philosophy, and the "Need to know" principle (aka the "Law of Demeter") is a very important one, too.
A: Some of the key reasons to use OO are to structure code in a similar way to how we humans like to perceive and relate to things, and exploit the benefits of economy, maintainability, reliability, and scalability.
i.e.: Humankind designed the wheel thousands of years ago. We may refine it all the time, but we certainly don't need to keep re-inventing it...
1) We like to categorise things: "this one's bigger than this one", "this one costs more than that one", "this one is almost the same as that one".
2) We like to simplify things: "OK, it's a V8 liquid cooled turbo driven tractor, but I still just turn the steering wheel and press my feet on the pedals to drive it, right?".
3) We like to standardise things: "OK, let's call triangles, circles, and squares all SHAPES, and expect them all to have an AREA and a CIRCUMFERENCE".
4) We like to adapt things: "hmmm, I like that, but can I have it in Racing Green instead?".
5) We like to create blueprints: "I haven't got the time or money (or approval) to build that yet, but it WILL have a door and a roof, and some windows, and walls".
6) We like to protect things: "OK, I'll let you see the total price, but I'm hiding the mark-up I added from you!".
7) We like things to communicate with each other: "I want to access my bank balance through: my mobile; my computer; an ATM; a bank employee; etc..".
To learn how to exploit OO (and see some of the advantages), I suggest you set yourself an exercise as homework - maybe a browser based application that deals with SHAPES such as circles, rectangles, and triangles, and keeps track of their area, colour, position, and z-index etc. Then add squares as a special case of rectangle, since a square is the same in respect to most of its definition, area, etc. To make it harder you could make a rectangle a type of quadrangle which is a type of polygon. etc. etc.
NOTE: I wouldn't start using a PHP Framework until you are comfortable with the basics of OO programming first. They are much more powerful when you can extend classes of your own and if you can't then it's a bit like learning something by rote -> much harder!
A: Think of a thingy. Any thingy, a thingy you want to do stuff to. Say, a breakfast.
(All code is pseudocode, any resemblance to any language living, dead, or being clinically abused in the banking industry is entirely coincidental and nothing to do with your post being tagged PHP)
So you define a template for how you'd represent a breakfast. This is a class:
class Breakfast {
}
Breakfasts contain attributes. In normal non-object-oriented stuff, you might use an array for this:
$breakfast = array(
'toast_slices' => 2,
'eggs' => 2,
'egg_type' => 'fried',
'beans' => 'Hell yeah',
'bacon_rashers' => 3
);
And you'd have various functions for fiddling with it:
function does_user_want_beans($breakfast){
if (isset($breakfast['beans']) && $breakfast['beans'] != 'Hell no'){
return true;
}
return false;
}
And you've got a mess, and not just because of the beans. You've got a data structure that programmers can screw with at will, an ever-expanding collection of functions to do with the breakfast entirely divorced from the definition of the data. So instead, you might do this:
class Breakfast {
var $toast_slices = 2;
var $eggs = 2;
var $egg_type = 'fried';
var $beans = 'Hell yeah';
var $bacon_rashers = 3;
function wants_beans(){
if (isset($this->beans) && $this->beans != 'Hell no'){
return true;
}
return false;
}
function moar_magic_pig($amount = 1){
$this->bacon_rashers += $amount;
}
function cook(){
breakfast_cook($this);
}
}
And then manipulating the program's idea of Breakfast becomes a lot cleaner:
$users = fetch_list_of_users();
foreach ($users as $user){
// So this creates an instance of the Breakfast template we defined above
$breakfast = new Breakfast();
if ($user->likesBacon){
$breakfast->moar_magic_pig(4);
}
// If you find a PECL module that does this, Email me.
$breakfast->cook();
}
I think this looks cleaner, and a far neater way of representing blobs of data we want to treat as a consistent object.
There are better explanations of what OO actually is, and why it's academically better, but this is my practical reason, and it contains bacon.
A: The best advice was from: xtofl.myopenid.com ^^^^
If you don't understand the purposes of patterns, you're really not going to use objects to their fullest. You need to know why inheritance, polymorphism, interfaces, factories, decorators, etc. really make design easier by addressing particular issues.
A: Instead of learning OO from scratch, I think it'd be easier if you took on a framework that facilitates object-oriented programming. It will "force" you to use the right OOP methods; you will be able to learn from the way the framework is written as to how to do OOP best.
I'd recommend the QCodo PHP5 framework http://www.qcodo.com. It has great video tutorials on how to set it up, as well as video trainings (http://www.qcodo.com/demos/).
Full disclosure: I've been developing on top of this framework for two years, and I've contributed code to their codebase (so I'm not completely impartial :-)).
A: Another pointer for learning OO:
Most OO tutorials will focus on inheritance (e.g. class X extends class Y). I think this is a bad idea. Inheritance is useful, but it can also cause problems. More importantly, inheritance isn't the point of OO. The point is abstraction; hiding the implementation details so you can work with a simple interface. Learn how to write good abstractions of your data, and you'll be in good shape. Don't sweat the inheritance stuff right away.
A: I have been in your shoes, but I saw the light after I read this book (a few times!) http://www.apress.com/book/view/9781590599099 After I read this, I really "got" it and I haven't looked back. You'll get it on Amazon.
I hope you persist, get it, and love it. When it comes together, it will make you smile.
Composition beats inheritance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Finding errors / warnings in Visual Studio I have experienced an annoying issue with Visual Studio 2005... sometimes when I rebuild, and even if I do a Rebuild Solution, it will come back with no errors or warnings, but then when I later edit another code file, even without changing it, and rebuild, it will find an error or warning in that other file. Clearly, the earlier Rebuild Solution did not recompile that file! How can I force VS to completely recompile every file?
A: I've seen this happen before when you have multiple projects in your solution and the references get mixed up.
Say you have four projects in your solution, Common, Business, Data, and UI. Assume that Common is referenced by the other three projects.
What we want is for Common to be a "project reference" from the other three projects - they'll then pick up their copy from the build output directory of Common.
But, sometimes, one of the projects will get its reference mixed up. Say, in this case, that UI starts referencing the copy of Common in the build output directory of Data. Now, any change that compiles "UI" without also compiling "Data" will result in two, possibly incompatible, versions of "Common" being a dependency of UI.
Another scenario is where the reference is to a binary, such as from a "lib" directory. Then, one of the projects ends up referring to a build output location instead of lib.
I don't know what causes this - but I see it all the time, unfortunately.
The fix is to go through the references of each project and find the one (or more) that point to the wrong place.
A: It might help to clean the solution prior to rebuilding -- right click on the solution in the Solution Explorer and choose "clean solution" -- this deletes temporary files and is supposed to clear out the bin and obj folders, so everything is rebuilt.
A: I'm with Guy Starbuck here, but would add that Rebuild Solution is supposed to do a Clean Solution followed by Build Solution, which should, then, have solved your issue to begin with. But VS 2005 can be terrible in this regard. Sometimes it just starts working after several rebuilds. If upgrading to 2008 isn't an option, consider manually clearing the bin folder.
A: Is this related to the Configuration Manager? There you can select which projects in your solution build. Not sure if this helps.
A: Depending on the types of warnings, this may not be possible, if I recall correctly.
For example, warning messages for XHTML compliance are ONLY displayed when the file is open. You might check the tolerance settings inside VS to see if you can change it.
A: This sounds strange - Rebuild should build everything regardless of changes and Build should only build things that have changed.
The behaviour you've described should only happen if you have modified something that is referenced by the unchanged file so that it is now incorrect.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Uninstall Sharepoint Infrastructure Update I installed WSS Infrastructure Update and MOSS Infrastructure Update (http://technet.microsoft.com/en-us/office/sharepointserver/bb735839.aspx) and now I can't restore the content database on an older version. Do you know if there is a way to uninstall it ?
A: The only option that I can think of is to restore the backup on another machine that has the same level of updates as when the backup was done, upgrade the whole box to Infrastructure Update, backup this environment and restore it in your already-upgraded-environment.
A: There are no supported methods to uninstall updates in MOSS or WSS. Your only option is to restore a backup, which is why you should always back up everything and test the integrity of the backup before installing updates.
A: There is no such option; as others pointed out, the only choice is to restore from a backup.
When you are trying to restore a content database to a different box, both should have the same set of updates installed; otherwise you might experience all kinds of problems.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: org.eclipse.swt.SWTError: Item not added Does somebody know how to recover a never-starting eclipse when the error "org.eclipse.swt.SWTError: Item not added" is raised again and again?
I'm using WebSphere Studio Site Developer (Windows) 5.1.0
The only stack trace in the .metadata/log file is:
SESSION ----------------------------------------------------------------------
!ENTRY org.eclipse.core.launcher 4 0 sep 17, 2008 16:39:00.564
!MESSAGE Exception launching the Eclipse Platform:
!STACK
java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: org.eclipse.swt.SWTError: Item not added
at java.lang.Throwable.<init>(Throwable.java)
at java.lang.Throwable.<init>(Throwable.java)
at org.eclipse.swt.SWTError.<init>(SWTError.java:82)
at org.eclipse.swt.SWTError.<init>(SWTError.java:71)
at org.eclipse.swt.SWT.error(SWT.java:2358)
at org.eclipse.swt.SWT.error(SWT.java:2262)
at org.eclipse.swt.widgets.Widget.error(Widget.java:385)
at org.eclipse.swt.widgets.Menu.createItem(Menu.java:464)
at org.eclipse.swt.widgets.MenuItem.<init>(MenuItem.java:77)
at org.eclipse.ui.internal.AcceleratorMenu.setAccelerators(AcceleratorMenu.java:177)
at org.eclipse.ui.internal.WWinKeyBindingService.updateAccelerators(WWinKeyBindingService.java:316)
at org.eclipse.ui.internal.WWinKeyBindingService.clear(WWinKeyBindingService.java:175)
at org.eclipse.ui.internal.WWinKeyBindingService.update(WWinKeyBindingService.java:267)
at org.eclipse.ui.internal.WWinKeyBindingService$1.partActivated(WWinKeyBindingService.java:107)
at org.eclipse.ui.internal.PartListenerList$1.run(PartListenerList.java:49)
at org.eclipse.core.internal.runtime.InternalPlatform.run(InternalPlatform.java:1006)
at org.eclipse.core.runtime.Platform.run(Platform.java:413)
at org.eclipse.ui.internal.PartListenerList.firePartActivated(PartListenerList.java:47)
at org.eclipse.ui.internal.WorkbenchPage.firePartActivated(WorkbenchPage.java:1180)
at org.eclipse.ui.internal.WorkbenchPage.onActivate(WorkbenchPage.java:1833)
at org.eclipse.ui.internal.WorkbenchWindow$7.run(WorkbenchWindow.java:1496)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:69)
at org.eclipse.ui.internal.WorkbenchWindow.setActivePage(WorkbenchWindow.java:1483)
at org.eclipse.ui.internal.WorkbenchWindow.restoreState(WorkbenchWindow.java:1363)
at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1263)
at org.eclipse.ui.internal.Workbench.access$10(Workbench.java:1223)
at org.eclipse.ui.internal.Workbench$12.run(Workbench.java:1141)
at org.eclipse.core.internal.runtime.InternalPlatform.run(InternalPlatform.java:1006)
at org.eclipse.core.runtime.Platform.run(Platform.java:413)
at org.eclipse.ui.internal.Workbench.openPreviousWorkbenchState(Workbench.java:1093)
at org.eclipse.ui.internal.Workbench.init(Workbench.java:870)
at org.eclipse.ui.internal.Workbench.run(Workbench.java:1373)
at org.eclipse.core.internal.boot.InternalBootLoader.run(InternalBootLoader.java:858)
at org.eclipse.core.boot.BootLoader.run(BootLoader.java:461)
at java.lang.reflect.AccessibleObject.invokeL(AccessibleObject.java:207)
at java.lang.reflect.Method.invoke(Method.java:271)
at org.eclipse.core.launcher.Main.basicRun(Main.java:291)
at org.eclipse.core.launcher.Main.run(Main.java:747)
at org.eclipse.core.launcher.Main.main(Main.java:583)
A: I had exactly the same problem. I did not restart my machine, and just used "eclipse -clean" to start eclipse. It worked. Thanks Jon for the hint.
A: Does restarting your computer resolve the problem with being able to open the workspace? There is a forum post (http://forums.sun.com/thread.jspa?messageID=3131484#3131484) that describes a similar problem with an identical stack trace as the one shown above. In the post, the author mentions that their machine was low on resources (they did not specify what type of resources were running low).
If restarting your computer does not work, you may want to try starting eclipse with the clean option:
eclipse -clean
The clean option will clean out any caches that Eclipse has created.
If all else fails, you may want to open a bug for this problem at https://bugs.eclipse.org/bugs/. Including a copy of your workspace (if possible), and including the stack trace in the bug would be helpful information for the person trying to diagnose the problem.
Good Luck!
A: Well, some things you can try are:
*
*Delete workspace .metadata dir. Obviously you will lose your workbench configuration.
*Rename your .metadata dir. Start Eclipse, and you will have a new .metadata dir. Close Eclipse, delete the new dir, and rename back the original dir. It sometimes works.
A: HI,
Check the task manager, whether any java process(java.exe or javaw.exe) running even after the closing of workbench. Kill those processes. You will get this error resolved
A: This worked when I moved the eclipse.ini from the eclipse install folder (where .exe is present). I ran into this problem when I was trying to increase the heap size in the eclipse.ini file (although I had seen this error earlier)
A: For me, I think this has something to do with my dual monitor setup and Actual Multiple Monitors that I have installed. I disabled that and the problem is gone.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Web Scripting for Java What is a good way to render data produced by a Java process in the browser?
I've made extensive use of JSP and the various associated frameworks (JSTL, Struts, Tapestry, etc), as well as more comprehensive frameworks not related to JSP (GWT, OpenLaszlo). None of the solutions have ever been entirely satisfactory - in most cases the framework is too constrained or too complex for my needs, while others would require extensive refactoring of existing code. Additionally, most frameworks seem to have performance problems.
Currently I'm leaning towards the solution of exposing my java data via a simple servlet that returns JSON, and then rendering the data using PHP or Ruby. This has the added benefit of instantly exposing my service as a web service as well, but I'm wondering if I'm reinventing the wheel here.
A: I personally use Tapestry 5 for creating webpages with Java, but I agree that it can sometimes be a bit overkill. I would look into using JAX-RS (java.net project, jsr311) it is pretty simple to use, it supports marshalling and unmarshalling objects to/from XML out of the box. It is possible to extend it to support JSON via Jettison.
There are two implementations that I have tried:
*
*Jersey - the reference implementation for JAX-RS.
*Resteasy - the implementation I prefer; good support for marshalling and unmarshalling a wide range of formats. Also pretty stable and has more features than Jersey.
Take a look at the following code to get a feeling for what JAX-RS can do for you:
@Path("/")
class TestClass {
@GET
@Path("text")
@Produces("text/plain")
String getText() {
return "String value";
}
}
This tiny class will expose itself at the root of the server (@Path on the class), then expose the getText() method at the URI /text and allow access to it via HTTP GET. The @Produces annotation tells the JAX-RS framework to attempt to turn the result of the method into plain text.
The easiest way to learn about what is possible with JAX-RS is to read the specification.
A: We're using Stripes. It gives you more structure than straight servlets, but it lets you control your urls through a @UrlBinding annotation. We use it to stream xml and json back to the browser for ajax stuff.
You could easily consume it with another technology if you wanted to go that route, but you may actually enjoy developing with stripes.
A: Check out Restlet for a good framework for exposing your domain model as REST services (including JSON and trivial XML output).
For rendering your info, maybe you can use GWT on the client side and consume your data services? If GWT doesn't float your boat, then maybe JQuery would?
A: Perhaps you could generate the data as XML and render it using XSLT?
I'm not sure PHP or Ruby are the answer if Java isn't fast enough for you!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I cycle through pages? Here's a challenge that I was tasked with recently. I still haven't figured out the best way to do it, maybe someone else has an idea.
Using PHP and/or HTML, create a page that cycles through any number of other pages at a given interval.
For instance, we would load this page and it would take us to google for 20 seconds, then on to yahoo for 10 seconds, then on to stackoverflow for 180 seconds and so on an so forth.
A: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
<title>Dashboard Example</title>
<style type="text/css">
body, html { margin: 0; padding: 0; width: 100%; height: 100%; overflow: hidden; }
iframe { border: none; }
</style>
<script type="text/javascript">
var Dash = {
nextIndex: 0,
dashboards: [
{url: "http://www.google.com", time: 5},
{url: "http://www.yahoo.com", time: 10},
{url: "http://www.stackoverflow.com", time: 15}
],
display: function()
{
var dashboard = Dash.dashboards[Dash.nextIndex];
frames["displayArea"].location.href = dashboard.url;
Dash.nextIndex = (Dash.nextIndex + 1) % Dash.dashboards.length;
setTimeout(Dash.display, dashboard.time * 1000);
}
};
window.onload = Dash.display;
</script>
</head>
<body>
<iframe name="displayArea" width="100%" height="100%"></iframe>
</body>
</html>
A: Use a separate iframe for the content, then use JavaScript's setTimeout() to wait for a period of time and set the iframe's location property.
A: When you are taken to another site (e.g. Google) control passes to that site, so in order for your script to keep running, you'd need to load the new site in a frame, and keep your script (which I'd imagine could most readily be implemented using Javascript) in another frame (which could be made very small so you can't see it).
A: I managed to create this thing. It's not pretty but it does work.
<?php
# Path the config file, full or relative.
$configfile="config.conf";
$tempfile="tmp.html";
# Read the file into an array
$farray=file($configfile);
# Count array elements
$count=count($farray);
if(!isset($_GET['s'])){
$s=0;
}else{
$s=$_GET['s'];
if($s==($count-1)){ # -1 because of the offset in starting our loop at 0 instead of 1
$s=0;
}else{
$s=$_GET['s']+1; # Increment the counter
}
}
# Get the line from the array
$entry=$farray[$s];
# Break the line on the comma into 2 entries
$arr=explode(",",$entry);
# Now each line is in 2 pieces - URL and TimeDelay
$url=strtolower($arr[0]);
# Check our url to see if it has an HTTP prepended, if it doesn't, give it one.
$check=strstr($url,"http://");
if($check==FALSE){
$url="http://".$url;
}
# Trim unwanted crap from the time
$time=rtrim($arr[1]);
# Get a handle to the temp file
$tmphandle=fopen($tempfile,"w");
# What does our meta refresh look like?
$meta="<meta http-equiv=\"refresh\" content=\"".$time.";url=index.php?s=".$s."\">\n";
# The iframe to display
$content="<iframe src =\"".$url."\" height=\"100%\" width=\"100%\"></iframe>";
# roll up the meta and content to be written
$str=$meta.$content;
# Write it
fwrite($tmphandle,$str);
# Close the handle
fclose($tmphandle);
# Load the page
die(header("Location:tmp.html"));
?>
Config files looks like (URL, Time to stay on that page):
google.com,5
http://yahoo.com,10
A: You could do this with JavaScript quite easily. It would help to know the deployment environment. Is it a kiosk or something?
For the JavaScript solution, serve up a page that contains a JavaScript that will pop open a new browser window. The controller page will then cause the new browser window to cycle through a series of pages. That's about the simplest way to do this that I can think of.
Edit: Agree with Simon's comment. This solution would work best in a kiosk or large, public display environment where the pages are just being shown without any user interaction.
A: Depends on your exact requirements. If you allow JavaScript and allow frames then you can stick a hidden frame within a frameset on your page into which you load some JavaScript. This JavaScript will then control the content of the main frame using the window.location object and setTimeout function.
The downside would be that the user's address bar would not update with the new URL. I'm not sure how this would achievable otherwise. If you can clarify the constraints I can provide more help.
Edit - Shad's suggestion is a possibility although unless the user triggers the action the browser may block the popup. Again you'd have to clarify whether a popup is allowable.
A: Create a wrapper HTML page with an IFrame in it, sized at 100% x 100%. Then add in some javascript that changes the src of the IFrame between set intervals.
A: I think it would have to work like gabbly.com, which sucks in other websites and displays them with its own content over it.
Once you read the other site in and were ready to display it, you couldn't really do it "in PHP"; you would have to send an HTML redirect meta-tag:
<meta HTTP-EQUIV="REFRESH" content="15; url=http://www.thepagecycler.com/nextpage.html">
Or you could use Javascript instead of the meta-tag.
A: This is not doable in a PHP script, unless you want to edit the redirect.... PHP is a back end technology; you're going to need to do this in Javascript or the like.
The best you're going to do, as far as I know, is to create a text file on your web server and load a different HTTP address based on time out of that text file, then redirect the browser to the site found in that text file.
A: The first solution that jumps to mind is to do this in a frameset. Hide one of the frames, and the other display the pages in question. Drive the page transitions with Javascript from the hidden frame.
function RefreshFrame()
{
parent.VisibleFrame.location.href = urlArray[i];
i++;
if(i < urlArray.length) setTimeout("RefreshFrame()", 20000);
}
var i = 0;
var urlArray = ['http://google.com','http://yahoo.com', 'http://www.search.com'];
RefreshFrame();
In this example the Javascript would be in the hiddend frame, and you would name your visible frame "VisibleFrame".
Disclaimer: I just wrote this code in the comment window and have not tested it
A: The theory behind the request is basically the ability to cycle through web page dashboards for various systems from a "kiosk" PC. I oversee a data center and we have several monitor systems that allow me view dashboards for temps, system up time, etc etc.
The idea is load a page that would cycle from dashboard to dashboard remaining on each for an amount of time specified by me, 1 minute on this board, 30 seconds on the next board, 2 minutes on the next and so on.. Javascript is absolutely allowable (though I have little experience with it). My mediums of choice are PHP/HTML and I'm not seeing a way to make this happen cleanly with just them..
A: There's a bunch of ways you can do this; I've written several scripts and tools with everything from JS to Ruby.
In the end it was much easier to use http://dashboardrotator.com . It handled browser restarts, memory allocation and accidental window closure for me with a nice simple GUI.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How do I prevent static variable sharing in the .NET runtime? I'm working on a game (C#) that uses a Robocode-like programming model: participants inherit a base Class and add strategic behaviors. The game then loads instances of participants' Classes and the competition begins. Unfortunately, participants can "cheat" by sharing static variables between instances of their competitor Class.
How do I prevent static variable sharing between Class instances in a .NET language? I know this is accomplished in Java by using a separate ClassLoader per instance. What's the .NET equivalent?
Further, my testing shows that separate AppDomains only work when loading a Class that extends MarshalByRefObject. I guess this makes sense - if you simply load a Serializable Class, the Class is copied into the current AppDomain so a second object from a different AppDomain will share its static vars. MarshalByRefObject guarantees that only a proxy is loaded into the current AppDomain and the statics stay behind in the loading AppDomain. See also: http://blogs.msdn.com/ericlippert/archive/2004/05/27/143203.aspx
A: Load each competitor into a different AppDomain.
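A minimal sketch of what that can look like (the assembly and type names here are placeholders; as noted in the question, the competitor type must derive from MarshalByRefObject so only a proxy crosses into the host domain):
// Give each competitor its own AppDomain so its static fields
// live apart from every other competitor's.
AppDomain sandbox = AppDomain.CreateDomain("Competitor1");
CompetitorBase competitor = (CompetitorBase)sandbox.CreateInstanceAndUnwrap(
    "Competitors",                  // assembly name (placeholder)
    "Competitors.SomeCompetitor");  // full type name (placeholder)
// ... run the match against the proxy ...
AppDomain.Unload(sandbox);          // the statics die with the domain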
A: static variables are per-AppDomain, so you could look into using different AppDomains, but I totally don't know what other consequences that may have.
Otherwise you could check the classes beforehand using reflection, and reject any classes that have static members.
A: // Needs using System.Reflection; GetFields returns an array, so test its length.
FieldInfo[] staticFields = typeof(CompetitorClass).GetFields(
    BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic);
if (staticFields.Length > 0)
{
    // take necessary steps against the cheater!
}
A: I don't have a specific answer, but I would look at the .NET Terrarium project. All participants are user loaded DLLs. They have done a lot of neat stuff to prevent unsafe and cheater code from loading/executing.
Justin Rogers has written extensively on Terrarium implementation details.
A: That's the solution in the .NET plugin for the real Robocode that I'm working on.
A: Would the [ThreadStatic] attribute work for you? As long as your players are in different threads, it would probably do the trick. From the MSDN site:
A static (Shared in Visual Basic) field marked with ThreadStaticAttribute is not shared between threads. Each executing thread has a separate instance of the field, and independently sets and gets values for that field. If the field is accessed on a different thread, it will contain a different value.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Cross-Database information_schema Joins in SQL Server I am attempting to provide a general solution for the migration of data from one schema version to another. A problem arises when the column data type from the source schema does not match that of the destination. I would like to create a query that will perform a preliminary compare on the columns data types to return which columns need to be fixed before migration is possible.
My current approach is to return the table and column names from information_schema.columns where DATA_TYPE's between catalogs do not match. However, querying information_schema directly will only return results from the catalog of the connection.
Has anyone written a query like this?
A: I do this by querying the system tables directly. Look into the syscolumns and sysobjects tables. You can also join across linked servers:
select t1.name as tname,c1.name as cname
from adventureworks.dbo.syscolumns c1
join adventureworks.dbo.sysobjects t1 on c1.id = t1.id
where t1.type = 'U'
order by t1.name,c1.colorder
A: I have always been in the fortunate position of having Red Gate Schema Compare, which I think would do what you ask. Cheap at twice the price!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Service factory: extremely long path/filenames problems I have been trying out Service Factory and have run into some problems in regards to long filenames - surpassing the limit in Vista/XP. The problem is that when generating code from the models service factory prefixes everything with the namespace specified. Making the folder structure huge. For example starting in
c:\work\sftest\MyWebService
I create each of the models with moderate length of names in data contracts and service interface. I set the namespace to be MyCompany.SFTest.MyWebservice
After generating code I end up with
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Business Logic
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Resource Access
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.DataContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.FaultContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.MessageContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceContracts
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Tests
Under each of the folders is a project file with the same prefix
c:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj
This blows up the recipe, as Windows can't accept filenames exceeding a specific length.
Is it necessary to explicitly include the namespace in each of the foldernames?
Obviously at some point I might want to branch a service to another location but for the same reason as above might be unable to.
Is there a workaround for this?
A: I don't know Service Factory, so I am not sure if this will help. Anyway, maybe the MSDN article Naming a File or Directory can help.
The Windows API has a maximum path length (MAX_PATH = 260). If you want to use longer path names you will have to use the Unicode versions of the API functions by prefixing your paths with "\\?\", i.e. use
"\\?\C:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj"
instead of
"C:\work\sftest\MyWebService\MyCompany.SFTest.MyWebService\Source\Service Interface\MyCompany.SFTest.MyWebService.ServiceImplementation\MyCompany.SFTest.MyWebService.ServiceImplementation.proj"
Does Service Factory allow that notation?
A: We had exactly this problem and we got around it by making our service factory a very thin wrapper around a normal library (that has been marked up with the WCF stuff). This gave us a normally deep project (the factory) and then a stunningly deep wrapper factory (without all that extract interface and logic and what not).
We still have some problems - but mainly in the client side - our servers are for the most part trouble free.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I implement the Post Commit Hook with Trac & SVN in a Windows Environment? I'm running in a Windows environment with Trac / SVN and I want commits to the repository to integrate with Trac and close the bugs that were noted in the SVN comment.
I know there's some post commit hooks to do that, but there's not much information about how to do it on windows.
Has anyone done it successfully? And what were the steps you followed to achieve it?
Here's the hook I need to put in place in SVN, but I'm not exactly sure how to do this in the Windows environment.
Trac Post Commit Hook
A: Benjamin's answer is close, but on Windows you need to give the hook script files an executable extension, such as .bat or .cmd. I use .cmd. You can take the template scripts, which are Unix shell scripts, and convert them to .bat/.cmd syntax.
But to answer the question of integrating with Trac, follow these steps.
*
*Ensure that Python.exe is on the system path. This will make your life easier.
*Create post-commit.cmd in \hooks folder. This is the actual hook script that Subversion will execute on the post-commit event.
@ECHO OFF
:: POST-COMMIT HOOK
::
:: The post-commit hook is invoked after a commit. Subversion runs
:: this hook by invoking a program (script, executable, binary, etc.)
:: named 'post-commit' (for which this file is a template) with the
:: following ordered arguments:
::
:: [1] REPOS-PATH (the path to this repository)
:: [2] REV (the number of the revision just committed)
::
:: The default working directory for the invocation is undefined, so
:: the program should set one explicitly if it cares.
::
:: Because the commit has already completed and cannot be undone,
:: the exit code of the hook program is ignored. The hook program
:: can use the 'svnlook' utility to help it examine the
:: newly-committed tree.
::
:: On a Unix system, the normal procedure is to have 'post-commit'
:: invoke other programs to do the real work, though it may do the
:: work itself too.
::
:: Note that 'post-commit' must be executable by the user(s) who will
:: invoke it (typically the user httpd runs as), and that user must
:: have filesystem-level permission to access the repository.
::
:: On a Windows system, you should name the hook program
:: 'post-commit.bat' or 'post-commit.exe',
:: but the basic idea is the same.
::
:: The hook program typically does not inherit the environment of
:: its parent process. For example, a common problem is for the
:: PATH environment variable to not be set to its usual value, so
:: that subprograms fail to launch unless invoked via absolute path.
:: If you're having unexpected problems with a hook program, the
:: culprit may be unusual (or missing) environment variables.
::
:: Here is an example hook script, for a Unix /bin/sh interpreter.
:: For more examples and pre-written hooks, see those in
:: the Subversion repository at
:: http://svn.collab.net/repos/svn/trunk/tools/hook-scripts/ and
:: http://svn.collab.net/repos/svn/trunk/contrib/hook-scripts/
setlocal
:: Debugging setup
:: 1. Make a copy of this file.
:: 2. Enable the command below to call the copied file.
:: 3. Remove all other commands
::call %~dp0post-commit-run.cmd %* > %1/hooks/post-commit.log 2>&1
:: Call Trac post-commit hook
call %~dp0trac-post-commit.cmd %* || exit 1
endlocal
*Create trac-post-commit.cmd in \hooks folder:
@ECHO OFF
::
:: Trac post-commit-hook script for Windows
::
:: Contributed by markus, modified by cboos.
:: Usage:
::
:: 1) Insert the following line in your post-commit.bat script
::
:: call %~dp0\trac-post-commit-hook.cmd %1 %2
::
:: 2) Check the 'Modify paths' section below, be sure to set at least TRAC_ENV
setlocal
:: ----------------------------------------------------------
:: Modify paths here:
:: -- this one *must* be set
SET TRAC_ENV=D:\projects\trac\membershipdnn
:: -- set if Python is not in the system path
SET PYTHON_PATH=
:: -- set to the folder containing trac/ if installed in a non-standard location
SET TRAC_PATH=
:: ----------------------------------------------------------
:: Do not execute hook if trac environment does not exist
IF NOT EXIST %TRAC_ENV% GOTO :EOF
set PATH=%PYTHON_PATH%;%PATH%
set PYTHONPATH=%TRAC_PATH%;%PYTHONPATH%
SET REV=%2
:: Resolve ticket references (fixes, closes, refs, etc.)
Python "%~dp0trac-post-commit-resolve-ticket-ref.py" -p "%TRAC_ENV%" -r "%REV%"
endlocal
*Create trac-post-commit-resolve-ticket-ref.py in \hooks folder. I used the same script from EdgeWall, only I renamed it to better clarify its purpose.
A: Alright, now that I've got some time I'll post my experience after figuring this all out. Thanks to Craig for getting me on the right track. Here's what you need to do (at least with SVN v1.4 and Trac v0.10.3):
*
*Locate your SVN repository that you want to enable the Post Commit Hook for.
*Inside the SVN repository there's a directory called hooks; this is where you'll be placing the post-commit hook.
*Create a file post-commit.bat (this is the batch file that's automatically called by SVN after a commit).
*Place the following code inside the post-commit.bat file (this will call your post-commit cmd file, passing in the parameters that SVN automatically passes: %1 is the repository, %2 is the revision that was committed).
%~dp0\trac-post-commit-hook.cmd %1 %2
*Now create the trac-post-commit-hook.cmd file as follows:
@ECHO OFF
::
:: Trac post-commit-hook script for Windows
::
:: Contributed by markus, modified by cboos.
:: Usage:
::
:: 1) Insert the following line in your post-commit.bat script
::
::    call %~dp0\trac-post-commit-hook.cmd %1 %2
::
:: 2) Check the 'Modify paths' section below, be sure to set at least TRAC_ENV
:: ----------------------------------------------------------
:: Modify paths here:
:: -- this one must be set
SET TRAC_ENV=C:\trac\MySpecialProject
:: -- set if Python is not in the system path
:: SET PYTHON_PATH=
:: -- set to the folder containing trac/ if installed in a non-standard location
:: SET TRAC_PATH=
:: ----------------------------------------------------------
:: Do not execute hook if trac environment does not exist
IF NOT EXIST %TRAC_ENV% GOTO :EOF
set PATH=%PYTHON_PATH%;%PATH%
set PYTHONPATH=%TRAC_PATH%;%PYTHONPATH%
SET REV=%2
:: GET THE AUTHOR AND THE LOG MESSAGE
for /F %%A in ('svnlook author -r %REV% %1') do set AUTHOR=%%A
for /F "delims==" %%B in ('svnlook log -r %REV% %1') do set LOG=%%B
:: CALL THE PYTHON SCRIPT
Python "%~dp0\trac-post-commit-hook" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%"
The most important part here is to set your TRAC_ENV, which is the path to your Trac environment (SET TRAC_ENV=C:\trac\MySpecialProject).
The next MAJORLY IMPORTANT THING in this script is to do the following:
:: GET THE AUTHOR AND THE LOG MESSAGE
for /F %%A in ('svnlook author -r %REV% %1') do set AUTHOR=%%A
for /F "delims==" %%B in ('svnlook log -r %REV% %1') do set LOG=%%B
As you can see in the script above, I'm using svnlook (a command-line utility that ships with SVN) to get the log message and the author that made the commit to the repository.
The next line of the script then calls the Python code that closes the tickets and parses the log message. I had to modify it to pass in the log message and the author (the usernames I use in Trac match the usernames in SVN, so that was easy).
:: CALL THE PYTHON SCRIPT
Python "%~dp0\trac-post-commit-hook" -p "%TRAC_ENV%" -r "%REV%" -u "%AUTHOR%" -m "%LOG%"
The above line passes the Trac environment, the revision, the person that made the commit, and their comment into the Python script.
Here's the Python script that I used. One thing I did in addition to the regular script: we use a custom field (fixed_in_ver), which our QA team uses to tell whether the fix they're validating is in the version of code they're testing in QA. So I modified the code in the Python script to update that field on the ticket. You can remove that code since you won't need it, but it's a good example of what you can do to update custom fields in Trac if you also want to do that.
I did that by having the users optionally include in their comment something like:
(version 2.1.2223.0)
I then use the same technique that the python script uses with regular expressions to get the information out. It wasn't too bad.
Anyway, here's the Python script I used. Hopefully this is a good tutorial on exactly what I did to get it to work in the Windows world, so you can all leverage this in your own shops...
If you don't want to deal with my additional code for updating the custom field, get the base script from this location as mentioned by Craig above (Script From Edgewall)
#!/usr/bin/env python
# trac-post-commit-hook
# ----------------------------------------------------------------------------
# Copyright (c) 2004 Stephen Hansen
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
# ----------------------------------------------------------------------------
# This Subversion post-commit hook script is meant to interface to the
# Trac (http://www.edgewall.com/products/trac/) issue tracking/wiki/etc
# system.
#
# It should be called from the 'post-commit' script in Subversion, such as
# via:
#
# REPOS="$1"
# REV="$2"
# LOG=`/usr/bin/svnlook log -r $REV $REPOS`
# AUTHOR=`/usr/bin/svnlook author -r $REV $REPOS`
# TRAC_ENV='/somewhere/trac/project/'
# TRAC_URL='http://trac.mysite.com/project/'
#
# /usr/bin/python /usr/local/src/trac/contrib/trac-post-commit-hook \
# -p "$TRAC_ENV" \
# -r "$REV" \
# -u "$AUTHOR" \
# -m "$LOG" \
# -s "$TRAC_URL"
#
# It searches commit messages for text in the form of:
# command #1
# command #1, #2
# command #1 & #2
# command #1 and #2
#
# You can have more then one command in a message. The following commands
# are supported. There is more then one spelling for each command, to make
# this as user-friendly as possible.
#
# closes, fixes
# The specified issue numbers are closed with the contents of this
# commit message being added to it.
# references, refs, addresses, re
# The specified issue numbers are left in their current status, but
# the contents of this commit message are added to their notes.
#
# A fairly complicated example of what you can do is with a commit message
# of:
#
# Changed blah and foo to do this or that. Fixes #10 and #12, and refs #12.
#
# This will close #10 and #12, and add a note to #12.
import re
import os
import sys
import time
from trac.env import open_environment
from trac.ticket.notification import TicketNotifyEmail
from trac.ticket import Ticket
from trac.ticket.web_ui import TicketModule
# TODO: move grouped_changelog_entries to model.py
from trac.util.text import to_unicode
from trac.web.href import Href
try:
    from optparse import OptionParser
except ImportError:
    try:
        from optik import OptionParser
    except ImportError:
        raise ImportError, 'Requires Python 2.3 or the Optik option parsing library.'

parser = OptionParser()
parser.add_option('-e', '--require-envelope', dest='env', default='',
                  help='Require commands to be enclosed in an envelope. If -e[], '
                       'then commands must be in the form of [closes #4]. Must '
                       'be two characters.')
parser.add_option('-p', '--project', dest='project',
                  help='Path to the Trac project.')
parser.add_option('-r', '--revision', dest='rev',
                  help='Repository revision number.')
parser.add_option('-u', '--user', dest='user',
                  help='The user who is responsible for this action')
parser.add_option('-m', '--msg', dest='msg',
                  help='The log message to search.')
parser.add_option('-c', '--encoding', dest='encoding',
                  help='The encoding used by the log message.')
parser.add_option('-s', '--siteurl', dest='url',
                  help='The base URL to the project\'s trac website (to which '
                       '/ticket/## is appended). If this is not specified, '
                       'the project URL from trac.ini will be used.')

(options, args) = parser.parse_args(sys.argv[1:])

if options.env:
    leftEnv = '\\' + options.env[0]
    rghtEnv = '\\' + options.env[1]
else:
    leftEnv = ''
    rghtEnv = ''

commandPattern = re.compile(leftEnv + r'(?P<action>[A-Za-z]*).?(?P<ticket>#[0-9]+(?:(?:[, &]*|[ ]?and[ ]?)#[0-9]+)*)' + rghtEnv)
ticketPattern = re.compile(r'#([0-9]*)')
versionPattern = re.compile(r"\(version[ ]+(?P<version>([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+))\)")

class CommitHook:
    _supported_cmds = {'close': '_cmdClose',
                       'closed': '_cmdClose',
                       'closes': '_cmdClose',
                       'fix': '_cmdClose',
                       'fixed': '_cmdClose',
                       'fixes': '_cmdClose',
                       'addresses': '_cmdRefs',
                       're': '_cmdRefs',
                       'references': '_cmdRefs',
                       'refs': '_cmdRefs',
                       'see': '_cmdRefs'}

    def __init__(self, project=options.project, author=options.user,
                 rev=options.rev, msg=options.msg, url=options.url,
                 encoding=options.encoding):
        msg = to_unicode(msg, encoding)

        self.author = author
        self.rev = rev
        self.msg = "(In [%s]) %s" % (rev, msg)
        self.now = int(time.time())
        self.env = open_environment(project)
        if url is None:
            url = self.env.config.get('project', 'url')
        self.env.href = Href(url)
        self.env.abs_href = Href(url)

        cmdGroups = commandPattern.findall(msg)

        tickets = {}
        for cmd, tkts in cmdGroups:
            funcname = CommitHook._supported_cmds.get(cmd.lower(), '')
            if funcname:
                for tkt_id in ticketPattern.findall(tkts):
                    func = getattr(self, funcname)
                    tickets.setdefault(tkt_id, []).append(func)

        for tkt_id, cmds in tickets.iteritems():
            try:
                db = self.env.get_db_cnx()
                ticket = Ticket(self.env, int(tkt_id), db)
                for cmd in cmds:
                    cmd(ticket)

                # determine sequence number...
                cnum = 0
                tm = TicketModule(self.env)
                for change in tm.grouped_changelog_entries(ticket, db):
                    if change['permanent']:
                        cnum += 1

                # get the version number from the checkin... and update the ticket with it.
                version = versionPattern.search(msg)
                if version != None and version.group("version") != None:
                    ticket['fixed_in_ver'] = version.group("version")

                ticket.save_changes(self.author, self.msg, self.now, db, cnum + 1)
                db.commit()

                tn = TicketNotifyEmail(self.env)
                tn.notify(ticket, newticket=0, modtime=self.now)
            except Exception, e:
                # import traceback
                # traceback.print_exc(file=sys.stderr)
                print>>sys.stderr, 'Unexpected error while processing ticket ' \
                                   'ID %s: %s' % (tkt_id, e)

    def _cmdClose(self, ticket):
        ticket['status'] = 'closed'
        ticket['resolution'] = 'fixed'

    def _cmdRefs(self, ticket):
        pass


if __name__ == "__main__":
    if len(sys.argv) < 5:
        print "For usage: %s --help" % (sys.argv[0])
    else:
        CommitHook()
A: Post-commit hooks live in the "hooks" directory wherever you have the repository living on the server side. I don't know where you have them in your environment, so this is just an example
e.g. (windows):
C:\Subversion\repositories\repo1\hooks\post-commit
e.g. (Linux/Unix):
/usr/local/subversion/repositories/repo1/hooks/post-commit
A: One thing I'll add (Code Monkey's answer is perfect) is to be wary of this, my mistake:
:: Modify paths here:
:: -- this one must be set
SET TRAC_ENV=d:\trac\MySpecialProject
:: -- set if Python is not in the system path
:: SET PYTHON_PATH=d:\python
:: -- set to the folder containing trac/ if installed in a non-standard location
:: SET TRAC_PATH=d:\python\Lib\site-packages\trac
I hadn't set the non-system paths, and it took me a while to see the obvious :D
Just make sure no-one else makes the same mistake! Thanks Code Monkey! 1000000000 points :D
A: First a big thanks to Code Monkey!
However, it's important to get the right python script depending on your trac version. To get the appropriate version, SVN check out the folder:
http://svn.edgewall.com/repos/trac/branches/xxx-stable/contrib
where xxx corresponds to the trac version you're using, for instance: 0.11
Otherwise you'll get a post-commit error that looks like this:
commit failed (details follow): MERGE of '/svn/project/trunk/web/directory/': 200 OK
A: For all Windows users who wants to install newest trac (0.11.5):
Follow the instructions on Trac's site named TracOnWindows.
Download the 32-bit 1.5 Python even if you have 64-bit Windows.
Note: I saw instructions somewhere on how to compile Trac to run natively on a 64-bit system.
Once you have installed everything required, go to the repository folder. There is a folder called hooks.
Inside it, put the files Code Monkey mentioned, but don't create "trac-post-commit-resolve-ticket-ref.py" like he did. Instead take the advice from Quant Analyst and do as he said:
"However, it's important to get the right python script depending on your trac version. To get the appropriate version, SVN check out the folder:
http://svn.edgewall.com/repos/trac/branches/xxx-stable/contrib
where xxx corresponds to the trac version you're using, for instance: 0.11"
From there download the file "trac-post-commit-hook" and put it in the hooks folder.
Edit these lines in trac-post-commit.cmd:
SET PYTHON_PATH="Path to python installation folder"
SET TRAC_ENV="Path to folder where you did tracd initenv"
Remember: no trailing backslash!
I have removed the quotes from the last line, changing -r "%REV%" to -r %REV%, but I don't know if this is needed. Even then it will not work yet (at least on my Windows 2008 server), because the hook will fail (the commit itself will go OK). This has to do with permissions: by default they are restricted, and we need to allow python or svn or trac (whichever it is, I don't know exactly) to change Trac's information. So go to your trac folder, then the project folder, then the db folder, right-click trac.db and choose Properties. Go to the Security tab and edit the permissions to allow Everyone full control. This isn't very secure, but I wasted all day on this security matter and I don't want to waste another just to find which user you should enable permissions for.
Hope this helps....
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Samba, other non interactive accounts - noshell, nologin, or blank? Conducting a user account cleanup across Solaris and Red Hat Linux systems, many of which have a number of Samba shares.
What preference do people have for creating the local Unix accounts for non-interactive Samba users? In particular, the shell entry:
*
*noshell
*nologin
*blank
And why?
JB
A: I have seen the shell set to the passwd command so that logging in only gives an opportunity to change the password. This may or may not be appropriate in your non-interactive user case, but it has the upside of allowing people to change passwords without bothering an admin.
A: I've always thought /bin/false was the standard. Some ISPs use a little menu system that lets them change their password / contact / finger info, check usages, etc. Whatever you use, you may want to add it to your /etc/shells file as well if you want the user to be able to use FTP for instance, as some services will be denied to users who are not using a shell listed in that file.
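As an illustration, a Samba-only account might be created like this (a sketch; the useradd flags are Linux shadow-utils, and the nologin path varies by platform, e.g. /usr/sbin/nologin on some distros, or /bin/false on Solaris):
# create a Samba-only account with no interactive shell and no home directory
useradd -M -s /sbin/nologin smbuser    # -M: skip home directory creation
smbpasswd -a smbuser                   # Samba passwords are stored separately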
A: I usually send all mine to /dev/null; that way I don't ever have to worry about it.
I have known some people who set it to /bin/logout so that when someone logged in they were logged back out.
A: Don't do blank. That runs /bin/sh.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SSRS 2005 - Looping Through Report Parameters I would like to be able to loop through all of the defined parameters on my reports and build a display string of the parameter name and value. I'd then display the results on the report so the user knows which parameters were used for that specific execution. The only problem is that I cannot loop through the Parameters collection. There doesn't seem to be an indexer on the Parameters collection, nor does it seem to implement IEnumerable. Has anyone been able to accomplish this? I'm using SSRS 2005 and it must be implemented within the Report Code (i.e., no external assembly). Thanks!
A: Unfortunately, it looks like there's no simple way to do this.
See http://www.jameskovacs.com/blog/DiggingDeepIntoReportingServices.aspx for more info. If you look at the comments of that post, there are some ways to get around this, but they're not very elegant. The simplest solution will require you to have a list of the report parameters somewhere in your Report Code, which obviously violates the DRY principle, but if you want the simplest solution, you might just have to live with that.
You might want to rethink your constraint of no external assembly, as it looks to me like it would be much easier to do this with an external assembly. Or if your report isn't going to change much, you can create the list of parameter names and values manually.
A: If I'm understanding your question, just do what I do:
Drop a textbox on the report, then while you are setting up the report, insert the following:
="Parameter1: " + Parameters!Parameter.Label + ", Parameter2: " + Parameters!Parameter2.Label...
Granted, it's not the prettiest thing, but it does work pretty well in our app.
And I'm using Labels instead of Values since we have datetime values, and the user only cares about either the short date or the month and year (depending on circumstance), and I've already done that formatting work in setting up the parameters.
A: I can think of at least two ways to do this. The first might work, the second will definitely work.
*
*Use the web service. I'm pretty sure I saw an API for getting a collection of parameters. Even if there's no direct access you can always create a standard collection and copy the ReportParameter objects from one to the other in a foreach loop - and then access Count, with individual parameter properties available by dereferencing the ReportParameter instances.
*Reports are RDL. RDL is XML. Create an XmlDocument and load the RDL file, then use the DOM to do, well, anything you like up to and including setting default values or even rewriting connection strings.
If your app won't have file-system access to the RDL files you can get them via the web service.
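As an illustration of the second option, here is a sketch that pulls the parameter names out of an RDL file with the DOM (the namespace URI is the SSRS 2005 report definition namespace; the file path is illustrative):
using System;
using System.Xml;

class RdlParameterDump
{
    static void Main()
    {
        XmlDocument rdl = new XmlDocument();
        rdl.Load("MyReport.rdl"); // illustrative path

        XmlNamespaceManager ns = new XmlNamespaceManager(rdl.NameTable);
        ns.AddNamespace("rdl",
            "http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition");

        // each ReportParameter element carries its name in the Name attribute
        foreach (XmlNode p in rdl.SelectNodes("//rdl:ReportParameter", ns))
            Console.WriteLine(p.Attributes["Name"].Value);
    }
}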
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Adding a guideline to the editor in Visual Studio Introduction
I've always been searching for a way to make Visual Studio draw a line after a certain number of characters.
Below is a guide to enable these so called guidelines for various versions of Visual Studio.
Visual Studio 2013 or later
Install Paul Harrington's Editor Guidelines extension.
Visual Studio 2010 and 2012
*
*Install Paul Harrington's Editor Guidelines extension for VS 2010 or VS 2012.
*Open the registry at:
VS 2010: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor
VS 2012: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\Text Editor
and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed.
*Or install the Guidelines UI extension (which is also a part of the Productivity Power Tools), which will add entries to the editor's context menu for adding/removing the entries without needing to edit the registry directly. The current disadvantage of this method is that you can't specify the column directly.
Visual Studio 2008 and Other Versions
If you are using Visual Studio 2008 open the registry at HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor and add a new string called Guides with the value RGB(100,100,100), 80. The first part specifies the color, while the other one (80) is the column at which the line will be displayed. The vertical line will appear when you restart Visual Studio.
This trick also works for various other version of Visual Studio, as long as you use the correct path:
2003: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\7.1\Text Editor
2005: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor
2008: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor
2008 Express: HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor
This also works in SQL Server 2005 and probably other versions.
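As a convenience, the same edit can be expressed as a .reg file and imported (shown here for Visual Studio 2008; change 9.0 to match your version):
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor]
"Guides"="RGB(100,100,100), 80"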
A: Without the need to edit any registry keys, the Productivity Power Tools extension (available for all versions of Visual Studio) provides guideline functionality.
Once installed, just right-click in the editor window and choose the add guideline option. Note that the guideline will always be placed on the column where your editing cursor currently is, regardless of where you right-click in the editor window.
To turn it off, go to Options, find Productivity Power Tools, and in that section turn off Column Guides. A restart will be necessary.
A: This will also work in Visual Studio 2010 (Beta 2), as long as you install Paul Harrington's extension to enable the guidelines from the VSGallery or from the extension manager inside VS2010. Since this is version 10.0, you should use the following registry key:
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Text Editor
Also, Paul wrote an extension that adds entries to the editor's context menu for adding/removing the entries without needing to edit the registry directly. You can find it here: http://visualstudiogallery.msdn.microsoft.com/en-us/7f2a6727-2993-4c1d-8f58-ae24df14ea91
A: This works for SQL Server Management Studio also.
A: I found this Visual Studio 2010 extension: Indent Guides
http://visualstudiogallery.msdn.microsoft.com/e792686d-542b-474a-8c55-630980e72c30
It works just fine.
A: Visual Studio 2017 / 2019
For anyone looking for an answer for a newer version of Visual Studio, install the Editor Guidelines plugin, then right-click in the editor and select this:
Visual Studio 2022
Same author as the package above but seems he had to split the extension to work with 2022.
https://marketplace.visualstudio.com/items?itemName=PaulHarrington.EditorGuidelinesPreview&ssr=false#overview
A: With VS 2013 Express this key does not exist. What I see is HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\12.0 and there is no mention of Text Editor under that.
A: For those who use Visual Assist, vertical guidelines can be enabled from the Display section in Visual Assist's options:
A: The registry path for Visual Studio 2008 is the same, but with 9.0 as the version number:
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor
A: There is now an extension for Visual Studio 2012 and 2013:
http://visualstudiogallery.msdn.microsoft.com/da227a0b-0e31-4a11-8f6b-3a149cf2e459
A: If you are a user of the free Visual Studio Express edition the right key is in
HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor
(note the VCExpress instead of VisualStudio), but it works! :)
A: For those running Visual Studio 2015 or later, the best solution is to install the Editor Guidelines by Paul Harrington rather than changing the registry yourself.
This is originally from Sara's blog.
It also works with almost any version of Visual Studio, you just need to change the "8.0" in the registry key to the appropriate version number for your version of Visual Studio.
The guide line shows up in the Output window too. (Visual Studio 2010 corrects this, and the line only shows up in the code editor window.)
You can also have the guide in multiple columns by listing more than one number after the color specifier:
RGB(230,230,230), 4, 80
This puts a white line at column 4 and column 80. This should be the value of a string named Guides under the "Text Editor" key (see below).
Be sure to pick a line color that will be visible on your background. This color won't show up on the default background color in VS. This is the value for a light grey: RGB(221, 221, 221).
Here are the registry keys that I know of:
Visual Studio 2010: HKCU\Software\Microsoft\VisualStudio\10.0\Text Editor
Visual Studio 2008: HKCU\Software\Microsoft\VisualStudio\9.0\Text Editor
Visual Studio 2005: HKCU\Software\Microsoft\VisualStudio\8.0\Text Editor
Visual Studio 2003: HKCU\Software\Microsoft\VisualStudio\7.1\Text Editor
Productivity Power Tools includes guidelines and other useful extensions for older versions of Visual Studio.
A: For VS 2019 just use this powershell script:
Get-ChildItem "$($env:LOCALAPPDATA)\Microsoft\VisualStudio\16.0_*" |
Foreach-Object {
$dir = $_;
$regFile = "$($dir.FullName)\privateregistry.bin";
Write-Host "Loading $($dir.BaseName) from ``$regFile``"
& reg load "HKLM\_TMPVS_" "$regFile"
New-ItemProperty -Name "Guides" -Path "HKLM:\_TMPVS_\Software\Microsoft\VisualStudio\$($dir.BaseName)\Text Editor" -Value "RGB(255,0,0), 80" -force | Out-Null;
Sleep -Seconds 5; # might take some time before the file can be unloaded
& reg unload "HKLM\_TMPVS_";
Write-Host "Unloaded $($dir.BaseName) from ``$regFile``"
}
A: You might be looking for rulers, not guidelines.
Go to settings > editor > rulers > and give an array of character counts to provide lines at the specified values.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "368"
}
|
Q: Fade splash screen in and out In a C# Windows Forms application, I have a splash screen with some multi-threaded processes happening in the background. What I would like to do is when I display the splash screen initially, I would like it to appear to "fade in". And then, once all the processes finish, I would like it to appear as though the splash screen is "fading out". I'm using C# and .NET 2.0. Thanks.
A: When using the Opacity property you have to remember that it's of type double, where 1.0 is complete opacity and 0.0 is complete transparency.
private void fadeTimer_Tick(object sender, EventArgs e)
{
this.Opacity -= 0.01;
if (this.Opacity <= 0)
{
this.Close();
}
}
A: You can use the Opacity property for the form to alter the fade (between 0.0 and 1.0).
A: while (this.Opacity > 0)
{
    this.Opacity -= 0.05;
    this.Refresh();   // force a repaint; Thread.Sleep alone blocks the UI thread
    Thread.Sleep(50); // this controls the speed of the fade
}
A: You could use a timer to modify the Form.Opacity level.
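Putting the pieces together, here is a minimal sketch of a splash form that fades in on load and fades out on demand (names like BeginFadeOut are illustrative; call it on the UI thread, e.g. via Invoke, when your background work completes):
using System;
using System.Windows.Forms;

public class SplashForm : Form
{
    private readonly Timer fadeTimer = new Timer();
    private double step = 0.05; // positive = fading in, negative = fading out

    public SplashForm()
    {
        Opacity = 0.0;              // start fully transparent
        fadeTimer.Interval = 50;
        fadeTimer.Tick += OnFadeTick;
        fadeTimer.Start();          // begin the fade-in immediately
    }

    // Call from the UI thread once the background processes finish.
    public void BeginFadeOut()
    {
        step = -0.05;
        fadeTimer.Start();
    }

    private void OnFadeTick(object sender, EventArgs e)
    {
        Opacity += step;
        if (step > 0 && Opacity >= 1.0)
        {
            fadeTimer.Stop();       // fully visible
        }
        else if (step < 0 && Opacity <= 0.0)
        {
            fadeTimer.Stop();
            Close();                // fully transparent: dismiss the splash
        }
    }
}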
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I set up a custom build step in Visual Studio 6? Unfortunately it looks like for various reasons I'm going to have to use Visual Studio 6 instead of a newer version of VS.
It's been a long time since I've used it. I'm looking through its menus and don't see any obvious way to set up any custom build steps (pre-build, post-build, pre-link... anything would help actually).
Can anyone give me instructions on how to set up steps like this?
A: Open your project, then open the Project Settings screen (Project → Settings or ALT-F7). Alternatively, right click on a file in the FileView and select Settings.
From the Project Settings screen, go to the General tab and check "Always use custom build step". This means that the file you just chose will be an input file for a custom build step. From the "Custom Build" tab you can then give the commands to run and specify what files will be generated.
For pre-link, post-build and such, select an executable (or library) from the Project Settings screen. Then use the little arrow button to scroll to the rightmost tabs. From there you'll find the Pre-link and Post-build steps.
It's quite simple, really, I'm sure this is enough to get you started.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a single resource which explains windows memory thoroughly? Seriously, I've trawled MSDN and only got half answers - what do the columns on the Task Manager mean? Why can't I calculate the VM usage by enumerating threads, modules, heaps &c.? How can I be sure I am accurately reporting to clients of my memory manager how much address space is left? Are there myriad collisions in the memory glossary namespace?
An online resource would be most useful in the short term, although books would be acceptable in the medium term.
A: Try the book "Windows Internals" by Mark Russinovich and David Solomon. It's pretty good at getting down to the nitty gritty.
A: Mark Russinovich has written the excellent book Windows Internals. A new edition that covers the Vista and Server 2008 operating systems is currently in the works with David Solomon, so you may want to pre-order that if your questions are about the new Windows operating systems instead of the old ones.
A: Here is a quick article on Windows Memory Management, which goes into sufficient depth to interpret what you're actually seeing in Task Manager or Process Explorer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to force browser to reload updated XML file? I am developing a website that relies heavily on XML data. The web site has an interface where users can update data. The data provided by a user is written to the respective XML file. However, the change is not reflected until 1 or 2 minutes later.
Does anyone know how to force the browser to load the latest XML file immediately?
A: This isn't a browser issue, it's an HTTP issue. You appear to be serving dynamic files without specifying that they shouldn't be cached. Use the Cache-Control: no-cache HTTP header to indicate this. Pragma: no-cache is the ancient HTTP 1.0 way, you can include it, but alone it is unlikely to be 100% effective.
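For instance, if the XML is served through an ASP.NET page or handler, a sketch like this (adapt to however the file is actually served) emits those headers:
// sketch: serve the XML with headers that forbid caching
Response.ContentType = "text/xml";
Response.Cache.SetCacheability(HttpCacheability.NoCache); // sends Cache-Control: no-cache
Response.AppendHeader("Pragma", "no-cache");              // for old HTTP 1.0 caches
Response.WriteFile(Server.MapPath("~/data.xml"));         // path is illustrative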
A: You can add a random (or sequential) number to the url that you change with every update.
A: Use "Pragma: no-cache" header in HTTP response.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: .NET abstract classes I'm designing a web site navigation hierarchy. It's a tree of nodes.
Most nodes are pages. Some nodes are links (think shortcuts in Windows).
Most pages hold HTML content. Some execute code.
I'd like to represent these as this collection of classes and abstract (MustInherit) classes…
This is the database table where I'm going to store all this…
database table http://img178.imageshack.us/img178/8573/nodetablefm8.gif
Here's where I'm stumped. PageNodes may or may not be roots.
How should I handle the root class?
I don't want to have to have all four of…
*
*HtmlPageNode
*CodePageNode
*HtmlRootPageNode
*CodeRootPageNode
I want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?
Clarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)
Update: Regarding the "Root" name...
I've asked: Is there a specific name for the node that coresponds to a subtree?
A: Use the Composite Pattern.
With regard to your root nodes, are there differences in functionality or is the difference entirely in appearance? If the difference is appearance only, I suggest you have an association with a separate Style class from your PageNode.
If there are differences in functionality AND you have lots of types of page then think about using the Decorator Pattern.
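To make the style-association idea concrete, here is a rough sketch (class and member names are mine, not the poster's): one node hierarchy, with styling hung off an optional Style reference that is resolved by walking up the tree:
using System.Collections.Generic;

class Style { /* colors, CSS class, etc. */ }

abstract class Node
{
    public Node Parent;
    public List<Node> Children = new List<Node>();

    // set only on the nodes that "root" a styled sub-tree
    public Style Style;

    // walk up until some ancestor supplies a style
    public Style EffectiveStyle
    {
        get
        {
            if (Style != null) return Style;
            return Parent != null ? Parent.EffectiveStyle : null;
        }
    }
}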
A: As noted, the Composite Pattern may be a good solution.
If that doesn't work for you, it may be simpler - if appropriate - to define Root as an Interface, and apply that as needed.
Of course, that doesn't let you provide any implementation for Root...
If Root must have implementation, you can use the Decorator Pattern.
A: Actually, as the "root" node is a special case of node, maybe you need RootHtmlPageNode : HtmlPageNode.
Another idea: as you do not specify the difference between a "root" and a normal node, maybe just a flag on the node specifying whether it is a root would also be a good design.
EDIT: Per your clarification, there is no functional difference between normal and root node, so a simple flag should be enough (or property IsRootNode). If "root" node only supplies a styling data (or any other data for itself and it's children), then you can place this styling data in a separate structure/class and fetch it recursively (based on IsRootNode):
class Node
{
    private Node parent;   // assumed: each node knows its parent (see note below)
    public bool IsRootNode;

    private StylingData stylingData;
    public StylingData StylingData
    {
        set
        {
            if (this.IsRootNode)
                this.stylingData = value;
            else
                throw new ApplicationException("The node is not root.");
        }
        get
        {
            if (this.IsRootNode)
                return this.stylingData;
            else
                return this.parent.StylingData;  // walk up until a root supplies the data
        }
    }
}
This assumes that each node has a reference to its parent.
It goes a bit beyond the question, as I do not know the exact design.
A:
I want the HtmlPageNode and CodePageNode classes to inherit either from PageNode or else from RootPageNode. Is that possible?
Yes, it's possible. Give HtmlPageNode and CodePageNode a constructor that accepts an abstract class which both PageNode and RootPageNode inherit. In the constructor of HtmlPageNode and CodePageNode you then accept either a PageNode or a RootPageNode. This way you have two different classes with the same methods but two different objects. Hope that helps!
A:
Clarification: There are multiple root nodes and roots may have parent nodes. Each is the root of only a sub-tree that has distinct styling. Think of different, color-coded departments. (Perhaps root is a poor name choice. Suggestions?)
Root is a poor name choice because it's (somewhat ironically) accepted as explicitly the top level of a tree structure, because the tree starts where root comes out of the ground. Any node beyond that is a branch or leaf and not directly attached to the root.
A better name would be something like IsAuthoritativeStyleNode, IsCascadingStyleNode, IsStyleParentNode, or instead qualify it: e.g. IsDepartmentRootNode. Giving things clear, unambiguous names is one of the things that drastically improves readability / ease of understanding.
You can't really achieve what you want just via abstract base classes/inheritance. As per other suggestion(s), consider interfaces instead.
I'd also consider thinking about whether you're letting the database schema drive your client side class design too much. Not saying it needs changing in this case, but it should at least be thought about. Think about how you could factor out properties into separate tables referencing the common 'Node' table, and normalize them to minimize nulls and/or duplicated identical data.
A: Should the PageNode class simply have a property of type Root?
Is that counter to the idea that a PageNode is-a Root? Or are they not "is-a Root" because only some of them are roots?
And does that imply that the property might traverse the tree looking for the root ancestor? Or is that just me?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Using Component Object Model (COM) on non-Microsoft platforms I'm regularly running into similar situations:
I have a bunch of COM .DLLs (no IDL files) which I need to use and invoke to be able to access some foreign (non-open, non-documented) data format.
Microsoft's Visual Studio platform has very nice capabilities to import such COM DLLs and use them in my project (Visual C++'s #import directive, or picking and adding them using Visual Basic .NET's dialogs) - and that's the vendor's recommended way to use them.
I would be interested into finding a way to use those DLLs on non-microsoft development platforms. Namely, using these COM classes in C++ project compiled with MinGW or Cygwin, or even Wine's GCC port to linux (compiles C++ targeting Win32 into binary running natively on Linux).
I have had some limited success using this driver, but it isn't successful in 100% of situations (I can't use COM objects returned by some methods).
Has anyone had success in similar situations?
A: The problem with the Ole/Com Object Viewer packaged with Visual Studio and Windows SDKs is that it produces a broken .IDL out of the .DLL, which can't further be compiled by MIDL into a .H/.CPP pair.
Wine's own reimplementation of OleViewer is currently unstable and crashes when trying to use those libraries.
A: Answering myself but I managed to find the perfect library for OLE/COM calling in non-Microsoft compilers : disphelper.
(it's available from sourceforge.net under a permissive BSD license).
It works both in C and C++ (and thus any other language with C bindings as well). It uses a printf/scanf-like format string syntax.
(You pass whatever you want as long as you specify it in the format string, unlike XYDispDriver which requires the arguments to exactly match whatever is specified in the type library).
I modified it a little bit to get it to also compile under Linux with WineGCC (to produce native Linux ELF binaries out of Win32 code), and to handle "by ref" calls automatically (stock disphelper requires the programmer to set up his/her own VARIANT).
My patched version and patches are available as a fork on github:
*
*https://github.com/DrYak/disphelper
And here are my patches :
*
*patch for single source
*patch for split source
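To give a flavour of the API, here is a sketch from memory of disphelper's core calls (the ProgID and method name are illustrative; check the samples bundled with the library for the exact format specifiers):
#include "disphelper.h"

int main(void)
{
    IDispatch *app = NULL;

    dhInitialize(TRUE);         /* initialise COM on this thread */
    dhToggleExceptions(TRUE);   /* print a message on failed calls */

    /* create a COM object by ProgID and call a method with an int argument */
    if (SUCCEEDED(dhCreateObject(L"Vendor.Application", NULL, &app)))
    {
        dhCallMethod(app, L".DoSomething(%d)", 42);
        app->lpVtbl->Release(app);
    }

    dhUninitialize(TRUE);
    return 0;
}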
A: I think you should be able to use the free tool Ole/Com Object Viewer to make the header files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|
Q: Does an IIS worker process clear session variables when it recycles? We're writing an asp.net web app on IIS 6 and are planning on storing our user login variables in a session. Will this be removed when the worker process recycles?
A: If session state is stored in-proc then yes, a worker process recycle will remove it. Use the out-of-proc model or SQL Server session storage if you want the values to survive a recycle.
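For instance, a sketch of the web.config change for the out-of-proc state server (the connection string shown is the default local ASP.NET State Service):
<system.web>
  <!-- session data survives worker process recycles because it lives
       in the ASP.NET State Service, not in the worker process -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=127.0.0.1:42424" />
</system.web>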
A: If you are using the default in-memory session management, the session variables will be cleared when worker process recycles
A: yes, unless you are using out of process session state.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I use WPF bindings with RelativeSource? How do I use RelativeSource with WPF bindings and what are the different use-cases?
A: If you want to bind to another property on the object:
{Binding Path=PathToProperty, RelativeSource={RelativeSource Self}}
If you want to get a property on an ancestor:
{Binding Path=PathToProperty,
RelativeSource={RelativeSource AncestorType={x:Type typeOfAncestor}}}
If you want to get a property on the templated parent (so you can do 2 way bindings in a ControlTemplate)
{Binding Path=PathToProperty, RelativeSource={RelativeSource TemplatedParent}}
or, shorter (this only works for OneWay bindings):
{TemplateBinding PathToProperty}
A: If an element is not part of the visual tree, then RelativeSource will never work.
In this case, you need to try a different technique, pioneered by Thomas Levesque.
He has the solution on his blog under [WPF] How to bind to data when the DataContext is not inherited. And it works absolutely brilliantly!
In the unlikely event that his blog is down, Appendix A contains a mirror copy of his article.
Please do not comment here, please comment directly on his blog post.
Appendix A: Mirror of blog post
The DataContext property in WPF is extremely handy, because it is automatically inherited by all children of the element where you assign it; therefore you don’t need to set it again on each element you want to bind. However, in some cases the DataContext is not accessible: it happens for elements that are not part of the visual or logical tree. It can be very difficult then to bind a property on those elements…
Let’s illustrate with a simple example: we want to display a list of products in a DataGrid. In the grid, we want to be able to show or hide the Price column, based on the value of a ShowPrice property exposed by the ViewModel. The obvious approach is to bind the Visibility of the column to the ShowPrice property:
<DataGridTextColumn Header="Price" Binding="{Binding Price}" IsReadOnly="False"
Visibility="{Binding ShowPrice,
Converter={StaticResource visibilityConverter}}"/>
Unfortunately, changing the value of ShowPrice has no effect, and the column is always visible… why? If we look at the Output window in Visual Studio, we notice the following line:
System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=ShowPrice; DataItem=null; target element is ‘DataGridTextColumn’ (HashCode=32685253); target property is ‘Visibility’ (type ‘Visibility’)
The message is rather cryptic, but the meaning is actually quite simple: WPF doesn’t know which FrameworkElement to use to get the DataContext, because the column doesn’t belong to the visual or logical tree of the DataGrid.
We can try to tweak the binding to get the desired result, for instance by setting the RelativeSource to the DataGrid itself:
<DataGridTextColumn Header="Price" Binding="{Binding Price}" IsReadOnly="False"
Visibility="{Binding DataContext.ShowPrice,
Converter={StaticResource visibilityConverter},
RelativeSource={RelativeSource FindAncestor, AncestorType=DataGrid}}"/>
Or we can add a CheckBox bound to ShowPrice, and try to bind the column visibility to the IsChecked property by specifying the element name:
<DataGridTextColumn Header="Price" Binding="{Binding Price}" IsReadOnly="False"
Visibility="{Binding IsChecked,
Converter={StaticResource visibilityConverter},
ElementName=chkShowPrice}"/>
But none of these workarounds seems to work, we always get the same result…
At this point, it seems that the only viable approach would be to change the column visibility in code-behind, which we usually prefer to avoid when using the MVVM pattern… But I’m not going to give up so soon, at least not while there are other options to consider
The solution to our problem is actually quite simple, and takes advantage of the Freezable class. The primary purpose of this class is to define objects that have a modifiable and a read-only state, but the interesting feature in our case is that Freezable objects can inherit the DataContext even when they’re not in the visual or logical tree. I don’t know the exact mechanism that enables this behavior, but we’re going to take advantage of it to make our binding work…
The idea is to create a class (I called it BindingProxy for reasons that should become obvious very soon) that inherits Freezable and declares a Data dependency property:
public class BindingProxy : Freezable
{
#region Overrides of Freezable
protected override Freezable CreateInstanceCore()
{
return new BindingProxy();
}
#endregion
public object Data
{
get { return (object)GetValue(DataProperty); }
set { SetValue(DataProperty, value); }
}
// Using a DependencyProperty as the backing store for Data. This enables animation, styling, binding, etc...
public static readonly DependencyProperty DataProperty =
DependencyProperty.Register("Data", typeof(object), typeof(BindingProxy), new UIPropertyMetadata(null));
}
We can then declare an instance of this class in the resources of the DataGrid, and bind the Data property to the current DataContext:
<DataGrid.Resources>
<local:BindingProxy x:Key="proxy" Data="{Binding}" />
</DataGrid.Resources>
The last step is to specify this BindingProxy object (easily accessible with StaticResource) as the Source for the binding:
<DataGridTextColumn Header="Price" Binding="{Binding Price}" IsReadOnly="False"
Visibility="{Binding Data.ShowPrice,
Converter={StaticResource visibilityConverter},
Source={StaticResource proxy}}"/>
Note that the binding path has been prefixed with “Data”, since the path is now relative to the BindingProxy object.
The binding now works correctly, and the column is properly shown or hidden based on the ShowPrice property.
A: Bechir Bejaoui exposes the use cases of the RelativeSources in WPF in his article here:
The RelativeSource is a markup extension that is used in particular binding cases when we try to bind a property of a given object to another property of the object itself, when we try to bind a property of an object to another one of its relative parents, when binding a dependency property value to a piece of XAML in case of custom control development, and finally in case of using a differential of a series of bound data. All of those situations are expressed as relative source modes. I will expose all of those cases one by one.
*
*Mode Self:
Imagine this case: a rectangle whose height we want to always equal its width (a square, let's say). We can do this using the element name:
<Rectangle Fill="Red" Name="rectangle"
Height="100" Stroke="Black"
Canvas.Top="100" Canvas.Left="100"
Width="{Binding ElementName=rectangle,
Path=Height}"/>
But in the above case we are obliged to indicate the name of the binding object, namely the rectangle. We can reach the same goal differently using the RelativeSource:
<Rectangle Fill="Red" Height="100"
Stroke="Black"
Width="{Binding RelativeSource={RelativeSource Self},
Path=Height}"/>
For that case we are not obliged to mention the name of the binding object, and the Width will always be equal to the Height whenever the height is changed.
If you want the Width to be half of the Height, you can do this by adding a converter to the Binding markup extension.
Let's imagine another case now:
<TextBlock Width="{Binding RelativeSource={RelativeSource Self},
Path=Parent.ActualWidth}"/>
The above case is used to tie a given property of a given element to one of its direct parents, as this element holds a property that is called Parent. This leads us to another relative source mode, which is FindAncestor.
*
*Mode FindAncestor
In this case, a property of a given element will be tied to one of its parents, of course. The main difference from the above case is that it's up to you to determine the ancestor type and the ancestor rank in the hierarchy to tie the property. By the way, try playing with this piece of XAML:
<Canvas Name="Parent0">
<Border Name="Parent1"
Width="{Binding RelativeSource={RelativeSource Self},
Path=Parent.ActualWidth}"
Height="{Binding RelativeSource={RelativeSource Self},
Path=Parent.ActualHeight}">
<Canvas Name="Parent2">
<Border Name="Parent3"
Width="{Binding RelativeSource={RelativeSource Self},
Path=Parent.ActualWidth}"
Height="{Binding RelativeSource={RelativeSource Self},
Path=Parent.ActualHeight}">
<Canvas Name="Parent4">
<TextBlock FontSize="16"
Margin="5" Text="Display the name of the ancestor"/>
<TextBlock FontSize="16"
Margin="50"
Text="{Binding RelativeSource={RelativeSource
FindAncestor,
AncestorType={x:Type Border},
AncestorLevel=2},Path=Name}"
Width="200"/>
</Canvas>
</Border>
</Canvas>
</Border>
</Canvas>
The above situation is two TextBlock elements embedded within a series of Border and Canvas elements which represent their hierarchical parents. The second TextBlock will display the name of the given parent at the relative source level.
So try changing AncestorLevel=2 to AncestorLevel=1 and see what happens. Then try changing the type of the ancestor from AncestorType=Border to AncestorType=Canvas and see what happens.
The displayed text will change according to the ancestor type and level. Then what happens if the ancestor level is not suitable for the ancestor type? This is a good question, I know that you're about to ask it. The answer is that no exceptions will be thrown, and nothing will be displayed at the TextBlock level.
*
*TemplatedParent
This mode enables tying a given ControlTemplate property to a property of the control that the ControlTemplate is applied to. To understand the issue well, here is an example below:
<Window.Resources>
<ControlTemplate x:Key="template">
<Canvas>
<Canvas.RenderTransform>
<RotateTransform Angle="20"/>
</Canvas.RenderTransform>
<Ellipse Height="100" Width="150"
Fill="{Binding
RelativeSource={RelativeSource TemplatedParent},
Path=Background}">
</Ellipse>
<ContentPresenter Margin="35"
Content="{Binding RelativeSource={RelativeSource
TemplatedParent},Path=Content}"/>
</Canvas>
</ControlTemplate>
</Window.Resources>
<Canvas Name="Parent0">
<Button Margin="50"
Template="{StaticResource template}" Height="0"
Canvas.Left="0" Canvas.Top="0" Width="0">
<TextBlock FontSize="22">Click me</TextBlock>
</Button>
</Canvas>
If I want to apply the properties of a given control to its control
template, then I can use the TemplatedParent mode. There is also a
markup extension similar to this one, TemplateBinding, which is a kind
of shorthand for the first; however, TemplateBinding is evaluated at
compile time, in contrast to TemplatedParent, which is evaluated at
run time. As you can see in the figure below, the background and the
content are applied from within the button to the control template.
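For comparison, a TemplateBinding version of the Ellipse's Fill from the template above would look like this (shorthand form of the same binding):
<Ellipse Height="100" Width="150"
         Fill="{TemplateBinding Background}"/>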
A: In WPF, RelativeSource binding exposes three properties to set:
1. Mode: This is an enum that could have four values:
a. PreviousData(value=0): It assigns the previous value of the property to
the bound one
b. TemplatedParent(value=1): This is used when defining the template of
a control and you want to bind to a value/property of the templated control.
For example, define ControlTemplate:
<ControlTemplate>
<CheckBox IsChecked="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Value, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" />
</ControlTemplate>
c. Self(value=2): When we want to bind to the element itself or to a property of the same element.
For example: Send checked state of checkbox as CommandParameter while setting the Command on CheckBox
<CheckBox ...... CommandParameter="{Binding RelativeSource={RelativeSource Self},Path=IsChecked}" />
d. FindAncestor(value=3): When you want to bind to a parent control
in the visual tree.
For example: bind the checkboxes in the records of a grid when the header checkbox is checked
<CheckBox IsChecked="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type iDP:XamDataGrid}}, Path=DataContext.IsHeaderChecked, Mode=TwoWay}" />
2. AncestorType: when the mode is FindAncestor, this defines what type of ancestor to look for
RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type iDP:XamDataGrid}}
3. AncestorLevel: when the mode is FindAncestor, this defines which level of ancestor to use (if there are two parents of the same type in the visual tree)
RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type iDP:XamDataGrid}, AncestorLevel=1}
Above are all use-cases for RelativeSource binding.
Here is a reference link.
A: Don't forget TemplatedParent:
<Binding RelativeSource="{RelativeSource TemplatedParent}"/>
or
{Binding RelativeSource={RelativeSource TemplatedParent}}
A: I am constantly updating my research on Binding.
Original Here
DataContext
DataContext is the DependencyProperty included in the FrameworkElement.
PresentationFramework.dll
namespace System.Windows
{
public class FrameworkElement : UIElement
{
public static readonly DependencyProperty DataContextProperty;
public object DataContext { get; set; }
}
}
And, all UI Controls in WPF inherit the FrameworkElement class.
At this point in learning Binding or DataContext, you don't have to study FrameworkElement in greater depth.
However, it is worth briefly mentioning that FrameworkElement is the closest base class that encompasses all UI controls.
DataContext is always the reference point for Binding.
A Binding resolves its value starting from the nearest DataContext.
<TextBlock Text="{Binding}" DataContext="James"/>
The value bound to Text="{Binding}" is passed directly from the nearest DataContext, here the TextBlock's own.
Therefore, the Binding result for Text is 'James'.
*
*Type integer
When assigning a value to DataContext directly from Xaml, resource definitions are required first for value types such as Integer and Boolean,
because every literal is otherwise recognized as a String.
1. Using the System namespace from mscorlib in Xaml
Simple value types are not supported out of the box.
You can map the namespace with any prefix, but sys is the one most commonly used.
xmlns:sys="clr-namespace:System;assembly=mscorlib"
2. Create YEAR resource key in xaml
Declare the value of the type you want to create in the form of a StaticResource.
<Window.Resources>
<sys:Int32 x:Key="YEAR">2020</sys:Int32>
</Window.Resources>
...
<TextBlock Text="{Binding}" DataContext="{StaticResource YEAR}"/>
*All value types
There are very few cases where a value type is bound directly into DataContext,
because we usually bind an object.
<Window.Resources>
<sys:Boolean x:Key="IsEnabled">true</sys:Boolean>
<sys:Double x:Key="Price">7.77</sys:Double>
</Window.Resources>
...
<StackPanel>
<TextBlock Text="{Binding}" DataContext="{StaticResource IsEnabled}"/>
<TextBlock Text="{Binding}" DataContext="{StaticResource Price}"/>
</StackPanel>
*Other types
Not only String but various other types are possible, because DataContext is of type object.
Finally...
When using Binding in WPF, most developers are not fully aware of the existence, function and importance of DataContext.
It may mean that their bindings are only connecting by luck.
Especially if you are responsible for or participating in a large WPF project, you should understand the DataContext hierarchy of the application more clearly. In addition, the introduction of WPF's various popular MVVM Framework systems without this DataContext concept will create even greater limitations in implementing functions freely.
Binding
*
*DataContext Binding
*Element Binding
*MultiBinding
*Self Property Binding
*Find Ancestor Binding
*TemplatedParent Binding
*Static Property Binding
DataContext Binding
string property
<TextBox Text="{Binding Keywords}"/>
Element Binding
<CheckBox x:Name="usingEmail"/>
<TextBlock Text="{Binding ElementName=usingEmail, Path=IsChecked}"/>
MultiBinding
<TextBlock Margin="5,2" Text="This disappears as the control gets focus...">
<TextBlock.Visibility>
<MultiBinding Converter="{StaticResource TextInputToVisibilityConverter}">
<Binding ElementName="txtUserEntry2" Path="Text.IsEmpty" />
<Binding ElementName="txtUserEntry2" Path="IsFocused" />
</MultiBinding>
</TextBlock.Visibility>
</TextBlock>
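The TextInputToVisibilityConverter referenced above isn't shown in the post; here is a minimal sketch of what it might look like (the logic -- visible only while the TextBox is empty and unfocused -- is my assumption):
using System;
using System.Globalization;
using System.Windows;
using System.Windows.Data;

public class TextInputToVisibilityConverter : IMultiValueConverter
{
    public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
    {
        // values[0] = Text.IsEmpty, values[1] = IsFocused (order set by the MultiBinding above)
        if (values[0] is bool && values[1] is bool)
        {
            bool isEmpty = (bool)values[0];
            bool isFocused = (bool)values[1];
            return isEmpty && !isFocused ? Visibility.Visible : Visibility.Collapsed;
        }
        return Visibility.Collapsed;
    }

    public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}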
Self Property Binding
<TextBlock x:Name="txt" Text="{Binding ElementName=txt, Path=Tag}"/>
If you have to bind your own property, you can use Self Property Binding, instead of using Element Binding.
You no longer have to declare x:Name to bind your own property.
<TextBlock Text="{Binding RelativeSource={RelativeSource Self}, Path=Tag}"/>
Find Ancestor Binding
Binds based on the closest matching parent control.
<TextBlock Text="{Binding RelativeSource={RelativeSource AncestorType=Window}, Path=Title}"/>
In addition to the properties of the controls found, the properties within the DataContext object can be used if it exists.
<TextBlock Text="{Binding RelativeSource={RelativeSource AncestorType=Window}, Path=DataContext.Email}"/>
TemplatedParent Binding
This is a method that can be used within a ControlTemplate; it gives you access to the control that owns the ControlTemplate.
<Style TargetType="Button">
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="Button">
<TextBlock Text="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Content}"/>
</ControlTemplate>
</Setter.Value>
</Setter>
You can access all of its properties as well as its DataContext.
<TextBlock Text="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=Content}"/>
Static Property Binding
You can bind to a static property value directly.
1. Declare static property.
namespace Exam
{
public class ExamClass
{
public static string ExamText { get; set; }
}
}
2. Using static class in XAML.
<Window ... xmlns:exam="clr-namespace:Exam">
3. Binding property.
<TextBlock Text="{Binding Path=(exam:ExamClass.ExamText)}"/>
Or, you can set Resource key like using Converter.
<Window.Resources>
<cvt:VisibilityToBooleanConverter x:Key="VisibilityToBooleanConverter"/>
<exam:ExamClass x:Key="ExamClass"/>
</Window.Resources>
...
<TextBlock Text="{Binding Source={StaticResource ExamClass}, Path=ExamText}"/>
I have never used the Static Property under normal circumstances. This is because data that deviates from its own DataContext can disrupt the flow of whole WPF applications and impair readability significantly. However, this method is actively used in the development stage to implement fast testing and functions, as well as in the DataContext (or ViewModel).
Bad Binding & Good Binding
✔️ If the property you want to bind is included in the DataContext, you don't have to use ElementBinding.
Using ElementBinding through a connected control is not a functional problem,
but it breaks the fundamental pattern of Binding.
Bad Binding
<TextBox x:Name="text" Text="{Binding UserName}"/>
...
<TextBlock Text="{Binding ElementName=text, Path=Text}"/>
Good Binding
<TextBox Text="{Binding UserName}"/>
...
<TextBlock Text="{Binding UserName}"/>
✔️ Do not use ElementBinding to reach a property belonging to a higher-level control.
Bad Binding
<Window x:Name="win">
<TextBlock Text="{Binding ElementName=win, Path=DataContext.UserName}"/>
...
Good Binding
<Window>
<TextBlock Text="{Binding RelativeSource={RelativeSource AncestorType=Window}, Path=DataContext.UserName}"/>
...
Great!
<Window>
<TextBlock DataContext="{Binding RelativeSource={RelativeSource AncestorType=Window}, Path=DataContext}"
Text="{Binding UserName}"/>
...
✔️ Do not use ElementBinding when using your own properties.
Bad Binding
<TextBlock x:Name="txt" Text="{Binding ElementName=txt, Path=Foreground}"/>
Good Binding
<TextBlock Text="{Binding RelativeSource={RelativeSource Self}, Path=Foreground}"/>
A: It's worth noting, for those stumbling across this while thinking of Silverlight:
Silverlight offers only a reduced subset of these commands
A: I created a library to simplify the binding syntax of WPF including making it easier to use RelativeSource. Here are some examples. Before:
{Binding Path=PathToProperty, RelativeSource={RelativeSource Self}}
{Binding Path=PathToProperty, RelativeSource={RelativeSource AncestorType={x:Type typeOfAncestor}}}
{Binding Path=PathToProperty, RelativeSource={RelativeSource TemplatedParent}}
{Binding Path=Text, ElementName=MyTextBox}
After:
{BindTo PathToProperty}
{BindTo Ancestor.typeOfAncestor.PathToProperty}
{BindTo Template.PathToProperty}
{BindTo #MyTextBox.Text}
Here is an example of how method binding is simplified. Before:
// C# code
private ICommand _saveCommand;
public ICommand SaveCommand {
get {
if (_saveCommand == null) {
_saveCommand = new RelayCommand(x => this.SaveObject());
}
return _saveCommand;
}
}
private void SaveObject() {
// do something
}
// XAML
{Binding Path=SaveCommand}
After:
// C# code
private void SaveObject() {
// do something
}
// XAML
{BindTo SaveObject()}
You can find the library here: http://www.simplygoodcode.com/2012/08/simpler-wpf-binding.html
Note in the 'BEFORE' example that I use for method binding that code was already optimized by using RelayCommand which last I checked is not a native part of WPF. Without that the 'BEFORE' example would have been even longer.
A: Some useful bits and pieces:
Here's how to do it mostly in code:
Binding b = new Binding();
b.RelativeSource = new RelativeSource(RelativeSourceMode.FindAncestor, this.GetType(), 1);
b.Path = new PropertyPath("MyElementThatNeedsBinding");
MyLabel.SetBinding(ContentProperty, b);
I largely copied this from Binding Relative Source in code Behind.
Also, the MSDN page is pretty good as far as examples go: RelativeSource Class
A: {Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type ItemType}}}
...
The default attribute of RelativeSource is the Mode property. A complete set of valid values is given here (from MSDN):
*
*PreviousData Allows you to bind the previous data item (not the control that contains the data item) in the list of data items being displayed.
*TemplatedParent Refers to the element to which the template (in which the data-bound element exists) is applied. This is similar to setting a TemplateBindingExtension and is only applicable if the Binding is within a template.
*Self Refers to the element on which you are setting the binding and allows you to bind one property of that element to another property on the same element.
*FindAncestor Refers to the ancestor in the parent chain of the data-bound element. You can use this to bind to an ancestor of a specific type or its subclasses. This is the mode you use if you want to specify AncestorType and/or AncestorLevel.
A: Here's a more visual explanation in the context of a MVVM architecture:
A: I just posted another solution for accessing the DataContext of a parent element in Silverlight that works for me. It uses Binding ElementName.
A: This is an example of the use of this pattern that worked for me on empty datagrids.
<Style.Triggers>
<DataTrigger Binding="{Binding Items.Count, RelativeSource={RelativeSource Self}}" Value="0">
<Setter Property="Background">
<Setter.Value>
<VisualBrush Stretch="None">
<VisualBrush.Visual>
<TextBlock Text="We didn't find any matching records for your search..." FontSize="16" FontWeight="SemiBold" Foreground="LightCoral"/>
</VisualBrush.Visual>
</VisualBrush>
</Setter.Value>
</Setter>
</DataTrigger>
</Style.Triggers>
A: I didn't read every answer, but I just want to add this information in case of relative source command binding of a button.
When you use a relative source with Mode=FindAncestor, the binding must be like:
Command="{Binding Path=DataContext.CommandProperty, RelativeSource={...}}"
If you don't add DataContext to your path, the property can't be retrieved at execution time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "651"
}
|
Q: What can you use to get an application to be able to receive SMS message?
*
*Do you need to use some kind of provider?
*Can you setup your own SMS server?
*Do any open source solutions exist?
I am an SMS newbie so any insight on how this is accomplished would be great. I am partial to Java but any language is fine.
A: I used kannel on a linux box with an old mobile phone connected via a serial cable to the box. Got a pre-paid card in the phone as I was using it for private use only. Worked like a charm!
A: You might take a look at Gammu if you're running on a Linux box:
http://www.gammu.org
Using Gammu, you can configure it to periodically poll a mobile phone for new SMS messages. When Gammu finds new messages, it can store them in an SQL database. You can then write another program to periodically poll the database and take action on new messages.
Using this general setup I successfully deployed a homemade 2-way SMS application. I configured Gammu to pull messages off of the phone over Bluetooth. Gammu placed them in a MySQL database, which I had a Tomcat web application periodically poll for new messages. When a new message was found, the system processed the message.
This is a somewhat "duct-tape and bailing wire" setup, but it worked quite well and was more reliable than many of the "professional" SMS gateways I tested beforehand. YMMV.
A: We've used mBlox (http://www.mblox) in the past, as they provide comprehensive international coverage, premium SMS, various levels of Quality of Service vs Price, and a solid Java-based API for both inbound and outbound SMS.
A: You will need an SMS gateway, googling "SMS gateway" will reveal many. I have used http://www.clickatell.com/products/sms_gateway.php with great success.
I do not know of any open source implementations, but will be monitoring this thread in case someone else does!
A: This is easy. Yes, you need an "SMS gateway" provider. There are a lot out there. These companies provide APIs for you to send/receive SMS.
e.g. the German company Mobilant provides an easy API. If you want to receive an SMS, just write a simple dynamic web page (PHP / JSP / something else) and let Mobilant call it.
e.g.
*
*Mobilant receives a SMS for you
*Mobilant calls your web page http://yourpage.com/receive.php?message=...
*You do what you need to do (see the sketch below)
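A minimal PHP sketch of such a receive page (the message parameter follows the hypothetical URL above; check your gateway's documentation for the real parameter names):
<?php
// receive.php -- called by the SMS gateway with the message as a GET parameter
$message = isset($_GET['message']) ? $_GET['message'] : '';

// Do what you need to do -- here we just append the message to a log file
file_put_contents('sms.log', date('c') . ' ' . $message . "\n", FILE_APPEND);

echo 'OK'; // gateways typically just expect an HTTP 200 response
?>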
You really don't want to set up your own SMS server or center ;-) It is really expensive, takes months to set up and costs some nice Ferraris.
Use a provider and pay per SMS. It's the cheapest and fastest way.
A: I'm not up with Java, so here's a nice guide on how to do it in Ruby on Rails: http://www.lukeredpath.co.uk/2007/8/29/sending-sms-messages-from-your-rails-application
If you want to send 'true' SMS you'll need to use an SMS gateway, (use of one is outlined in the above guide).
You can use MMS to send messages to an email address that looks something like 1234567890@messages.whatever.com. You can use mail functions to do this. There's some information about that here: http://contentdeveloper.com/2006/06/open-source-sms-text-messaging-application/
A: First, you need an SMS gateway. Take a look at Kannel SMS Gateway.
A: Agreed with Kannel. You can set it up on a LAMP server with a GSM modem too.
A: TextMarks provides a service where they map an incoming SMS to them to an HTTP GET to a URL you provide and then send the response back as another SMS. They don't charge you if you let them add some advertising to the reply SMS. The problem is they don't provide this for free anymore for T-Mobile due to T-Mobile charging them. I'd be willing to pay per message, but they charge $0.20 per user-month, which is rather steep. Anyone know of anyone who provides this service?
A: You actually don't need an SMS gateway; nearly every cell phone can send/receive SMS messages to/from any email address. I built an SMS service (http://www.txtreg.net) using Nearly Free Speech's ability to forward email to a URL as a POST request. User sends a text to an email address, PHP script processes it, and sends an email right back to their phone.
A: Try SMS Enabler software. To receive SMS messages it uses a 3G/4G/GSM USB modem connected to a pc. It can forward incoming messages to a URL over HTTP, or store them in a database table, or write them to a CSV file, in real-time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: calling thread.start() within its own constructor Is it legal for a thread to call this.start() inside its own constructor? And if so, what potential issues can this cause? I understand that the object won't have been fully initialized until the constructor has run to completion, but aside from this, are there any other issues?
A: You will also see weird issues if the Thread class is ever subclassed further. In that case, you'll end up with the thread already running once super() exits, and anything the subclass might do in its constructor could be invalid.
@bill barksdale
If the thread is already running, calling start again gets you an IllegalThreadStateException, you don't get 2 threads.
A: I assume that you want to do this to make your code less verbose; instead of saying
Thread t = new CustomThread();
t.start();
activeThreads.add(t);
you can just say
activeThreads.add( new CustomThread() );
I also like having less verbosity, but I agree with the other respondents that you shouldn't do this. Specifically, it breaks the convention; anyone familiar with Java who reads the second example will assume that the thread has not been started. Worse yet, if they write their own threading code which interacts in some way with yours, then some threads will need to call start and others won't.
This may not seem compelling when you're working by yourself, but eventually you'll have to work with other people, and it's good to develop good coding habits so that you'll have an easy time working with others and code written with the standard conventions.
However, if you don't care about the conventions and hate the extra verbosity, then go ahead; this won't cause any problems, even if you try to call start multiple times by mistake.
A: By the way, if one wants lower verbosity and still keep the constructor with its "standard" semantics, one could create a factory method:
activeThreads.add( CustomThread.newStartedThread() );
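A minimal sketch of that factory method (class and method names are illustrative):
public class CustomThread extends Thread {

    private CustomThread() {
        // finish all field initialization here, before the thread can run
    }

    // Construction completes before start() is called, so the new thread
    // never observes a partially constructed object.
    public static CustomThread newStartedThread() {
        CustomThread t = new CustomThread();
        t.start();
        return t;
    }

    @Override
    public void run() {
        // the thread's work goes here
    }
}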
A: For memory-safety reasons, you shouldn't expose a reference to an object or that object's fields to another thread from within its constructor. Assuming that your custom thread has instance variables, by starting it from within the constructor, you are guaranteed to violate the Java Memory Model guidelines. See Brian Goetz's Safe Construction Techniques for more info.
A: It's legal, but not wise. The Thread part of the instance will be completely initialised, but your constructor may not have finished. There is very little reason to extend Thread, and pulling tricks like this isn't going to help your code.
A: It is "legal", but I think the most important issue is this:
A class should do one thing and do it well.
If your class uses a thread internally, then the existence of that thread should not be visible in the public API. This allows improvement without affecting the public API. Solution: extend Runnable, not Thread.
If your class provides general functionality which, in this case, happens to run in a thread, then you don't want to limit yourself to always creating a thread. Same solution here: extend Runnable, not Thread.
For less verbosity I second the suggestion to use a factory method (e.g. Foo.createAndRunInThread()).
A: Legal ... yes (with caveats as mentioned elsewhere). Advisable ... no.
It's just a smell you can all too easily avoid. If you want your thread to auto-start, just do it like Heinz Kabutz:
public class ThreadCreationTest {
public static void main(String[] args) throws InterruptedException {
final AtomicInteger threads_created = new AtomicInteger(0);
while (true) {
final CountDownLatch latch = new CountDownLatch(1);
new Thread() {
{ start(); } // <--- Like this ... sweet and simple.
public void run() {
latch.countDown();
synchronized (this) {
System.out.println("threads created: " +
threads_created.incrementAndGet());
try {
wait();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
};
latch.await();
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is there any tool which can generate a report for a valid C program Is there any tool that can parse a valid C program and generate a report which contains list of functions, global variables, #define constants, local variables in each function, etc.
A: Doxygen does all of the above.
A: Try exuberant-ctags with the -x option and tell it to generate all of its kinds.
Exuberant CTAGS is the default ctags on many linux distros.
You might try: exuberant-ctags -x --c-kinds=cdefglmnpstuvx --language-force=c filename
This will even work if the filename doesn't have a .c extension.
You can use exuberant-ctags --list-kinds=c to see the possible tags.
Under windows, the cygwin environment supports ctags. I'm not sure if there's a windows build that doesn't need cygwin.
A: There are a few tools, depending on what you want to do. I'm not sure what you mean by "report"; things like lxr will generate HTML with cross-referenced links. But for a person to use to help understand some code, there are ncc or cscope (the latter of which is in most Linux distributions); some of the IDEs also have some of these features (like Eclipse).
Older alternatives to cscope are ctags and etags.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting a full list of the URLS in a rails application How do I get a a complete list of all the urls that my rails application could generate?
I don't want the routes that I get from rake routes; instead I want to get the actual URLs corresponding to all the dynamically generated pages in my application...
Is this even possible?
(Background: I'm doing this because I want a complete list of URLs for some load testing I want to do, which has to cover the entire breadth of the application)
A: I was able to produce useful output with the following command:
$ wget --spider -r -nv -nd -np http://localhost:3209/ 2>&1 | ack -o '(?<=URL:)\S+'
http://localhost:3209/
http://localhost:3209/robots.txt
http://localhost:3209/agenda/2008/08
http://localhost:3209/agenda/2008/10
http://localhost:3209/agenda/2008/09/01
http://localhost:3209/agenda/2008/09/02
http://localhost:3209/agenda/2008/09/03
^C
A quick reference of the wget arguments:
# --spider don't download anything.
# -r, --recursive specify recursive download.
# -nv, --no-verbose turn off verboseness, without being quiet.
# -nd, --no-directories don't create directories.
# -np, --no-parent don't ascend to the parent directory.
About ack
ack is like grep but use perl regexps, which are more complete/powerful.
-o tells ack to only output the matched substring, and the pattern I used looks for anything non-space preceded by 'URL:'
A: You could pretty quickly hack together a program that grabs the output of rake routes and then parses the output to put together a list of the URLs.
What I have, typically, done for load testing is to use a tool like WebLOAD and script several different types of user sessions (or different routes users can take). Then I create a mix of user sessions and run them through the website to get something close to an accurate picture of how the site might run.
Typically I will also do this on a total of 4 different machines running about 80 concurrent user sessions to realistically simulate what will be happening through the application. This also makes sure I don't spend overly much time optimizing infrequently visited pages and can, instead, concentrate on overall application performance along the critical paths.
A: Check out the Spider Integration Tests written by Courtnay Gasking
http://pronetos.googlecode.com/svn/trunk/vendor/plugins/spider_test/doc/classes/Caboose/SpiderIntegrator.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Code/Document Management for a very small company I work for a very small company (~5 employees, 2.5 coders). We have gotten away with no code or document management for several years, but it's starting to catch up with us as we grow a bit.
Any suggestions for a management system. Free is better, but cheap is acceptable. We just don't want to spend more time on installation/configuration than it is going to save us.
We use mostly VC++ 6, but we're branching into VC# 2008. Also, we need to keep track of mechanical drawings and circuit diagrams for several pieces of hardware, as well as user manuals for both hardware and software (but I don't really expect to find one tool that will do all of this, just hoping).
A: Subversion (SVN) is an excellent option for you. It's free, integrates nicely into Windows with TortoiseSVN, and is well-tolerated by users.
We are using it for source code, as well as for document management.
A: http://trac.edgewall.org/ - might be a bit hard to install but otherwise is very good if coupled with svn repository
A: Mantis is good for issue tracking. Subversion for source control. Both are free.
For documents, I do not know. Sounds like you would do fine with a network share.
A: You may want to look at Trac.
A: I work for a similar sized company, and when I got here I was in the same place as you. I implemented SVN/Subversion http://subversion.tigris.org/ quite easily. If you use the svn protocol and use svnserve (can be setup as a windows service that auto starts on your server) it should take you 1.5-3 hours to setup depending on how much you want to read http://svnbook.red-bean.com/, see collabnet http://www.collab.net/downloads/subversion/ for the Windows package download
Using Windows, you can use Tortoise SVN which integrates into the windows shell. There is also a new release of Ankh SVN (2.0) http://ankhsvn.open.collab.net/ that integrates into Visual Studio. Ankh is very nice (has pending changes window, kind of similar to Subclipse like functionality) but it is a new release and is somewhat buggy (we have experienced some memory probs and slowness). We currently use both Tortoise for initial checkouts or imports and Ankh for everything else and are pretty happy.
If you have any Mac users, there are a lot of options out there. We have a mac user here who uses Versions http://www.versionsapp.com/, though it sounds like they will charge for it once they get out of beta.
I would recommend SVN because it is widely used out there and I feel that is important with open source projects you are going to use daily for production purposes. Just to spell it out, everything (other than Versions) mentioned is free.
A: Perforce!
It's extremely fast compared to most other source control systems. It works great remotely. (SSH tunnels, in my case)
The VS plugins are quite decent... I haven't tried the Eclipse one that much yet.
If you can get by with two users with 5 workspaces each, then you can use it for free. (I do, currently)
If that won't work, then it does cost a bit... something like $800/user I believe. Sometime next year I'm probably paying that. (5 workspaces is tough when you work on several machines with VMs)
Still, I heard the slower-than-glacial ClearCase/ClearQuest system one client one mine is using was something like $10k per developer, so expensive where source control is concerned is a relative concept.
Don't skimp on the source control, man! Slow source control is a serious pain in the a$$.
Avoid SourceSafe-like systems that only version files... use systems that track tasks or change sets. It's very useful to see what all belongs together as a task. Tags are not an acceptable substitute.
Also, the journalling nature of Perforce makes backups and recovery a lot easier.
A: Use Git for source control, Basecamp/Pivotal Tracker/Unfuddled for coding workflow, and Sharepoint/Google Docs for document management.
A: If you get an MSDN developer license, you can run TFS Workgroup Edition. That has source control and document management rolled up into one package that's pretty easy to use and manage. That, in addition to an internal wiki, is what my company does.
A: Use Subversion. It's free and is the preferred source control system for the vast majority of open source projects.
SVN uses shallow copies, so when you have large files in a repository and you branch, a full file copy isn't done... just a pointer to the original. As for text files (code) only diffs are stored.
Use TortoiseSVN for windows explorer integration.
TFS is a pig, and you'd need to open Visual Studio to interact with source explorer. Stupid for a CAD engineer to need a TFS license just for that.
For document management, just use Windows Sharepoint Services that comes with Windows Server 2003 (or 2008).
A: I also work for a small company and we mainly develop in .NET languages. We have decided to use Visual SourceSafe for source control, despite its questionable reputation, since it integrates nicely with Visual Studio. VSS works very well for us, and we have not experienced any serious problems with it. Also, we host a SharePoint server, which we use to store documents like coding standards, storyboards, and even our SCRUM log.
A: We use HostingPlayground. For $6 per month we get multiple Subversion repositories and an instance of Trac. Can't beat it. And since its a service its available immediately.
A: It seems the solution for your 'management' requirements will require at least a tool or set of tools in the following categories: (sorry about the links, not enough reputation to put proper ones in the reply)
*
*Source Code Management
*Trouble/Bug Ticketing
*Document Management
Definitely take a look at "Tools to help a small shop score higher on the Joel test" (stackoverflow.com/questions/15024/tools-to-help-a-small-shop-score-higher-on-the-joel-test), referenced in Kristopher's answer (stackoverflow.com/questions/84303/code-document-management-for-a-very-small-company/84363#84363).
Each have various free/open source solutions, and likewise there are commercial solutions.
Source Code Management (SCM)
A significant trend in source code management is the evolution from centralised code management with something like TFS, CVS or SVN (subversion.tigris.org), to decentralised 'distributed' source code management with tools such as Mercurial (www.selenic.com/mercurial/wiki/) or Git (git-scm.com). Some of the tools also integrate with continuous integration systems.
The above mentioned source code management tools all have nice ms windows integration tools, and some even have closer Visual Studio integration (e.g. TFS, ankhsvn.open.collab.net/ ANKH svn mentioned by Mario).
A simplistic generalisation would recommend git/mercurial when your coding involves a good portion of time disconnected from your centralised source code repository (such as doing a lot of coding from home when your repository is not accessible through the Internet).
Wikipedia has a nice overview of the various issues related to source code management and the benefits of the various options (en.wikipedia.org/wiki/Source_code_management).
If you haven't used scm before, just pick one or two of the tools that fits your groups requirements and test it. Of course, if you know someone near who has experience with a particular scm solution it may help with the team's learning curve to have that shared experience around.
My pick for your scenario: Subversion with ankhsvn.open.collab.net Ankh SVN for Visual Studio integration.
Trouble/Bug Ticketing
None of the tools available solve everything for everybody; each has its advantages and most require some compromise from a development team's existing modus operandi. Again, Wikipedia is your friend, with a general summary (en.wikipedia.org/wiki/Bug_tracker) and a comparison of the major tools (en.wikipedia.org/wiki/Comparison_of_issue_tracking_systems).
Installation
The PHP-based tools are the easiest (in my experience) to get up and running, and the Perl tools more involved. Of course there's a Python one that's really easy to install, but then requires a better mind than mine to configure.
My pick for your scenario: trac.edgewall.org/ Trac
Trac is an enhanced wiki and issue tracking system for software development projects. Trac uses a minimalistic approach to web-based software project management. Our mission is to help developers write great software while staying out of the way. Trac should impose as little as possible on a team's established development process and policies.
It provides an interface to Subversion (or other version control systems), an integrated Wiki and convenient reporting facilities.
Trac allows wiki markup in issue descriptions and commit messages, creating links and seamless references between bugs, tasks, changesets, files and wiki pages. A timeline shows all current and past project events in order, making the acquisition of an overview of the project and tracking progress very easy. The roadmap shows the road ahead, listing the upcoming milestones.
Drawings/Document Management
If you use Subversion with Trac then much of your document management may be solved with these tools. Otherwise another Stack Overflow discussion topic, "Developer documentation: SharePoint document management vs. ScrewTurn wiki" (stackoverflow.com/questions/587481/developer-documentation-sharepoint-document-management-vs-screwturn-wiki), is a good read for a Windows-centric environment.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do indicate the SQL default library in an IBM iSeries 2 connection string to an AS/400? I'm connecting to an AS/400 stored procedure layer using the IBM iSeries Access for Windows package. This provides a .NET DLL with classes similar to those in the System.Data namespace. As such we use their implementation of the connection class and provide it with a connection string.
Does anyone know how I can amend the connection string to indicate the default library it should use?
A: If you are connecting through .NET:
Provider=IBMDA400;Data Source=as400.com;User Id=user;Password=password;Default Collection=yourLibrary;
Default Collection is the parameter that sets the library where your programs should start executing.
And if you are connecting through ODBC from Windows (like setting up a driver in the control panel):
DRIVER=Client Access ODBC Driver(32-bit);SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary
In this case LibraryList is the parameter to set, remember this is for ODBC connection.
There are two drivers from IBM to connect to the AS400, the older one uses the above connection string, if you have the newest version of the client software from IBM called "System i Access for Windows" then you should use this connection string:
DRIVER=iSeries Access ODBC Driver;SYSTEM=as400.com;EXTCOLINFO=1;UID=user;PWD=password;LibraryList=yourLibrary
The last is pretty much the same, only the DRIVER parameter value changes.
If you are using this in a .NET application don't forget to add the providerName parameter to your XML tag and define the API used for connecting which would be OleDb in this case:
providerName="System.Data.OleDb"
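For illustration, a minimal C# sketch using the OLE DB provider string from the top of this answer (system name, credentials and library are placeholders):
using System.Data.OleDb;

class LibraryDemo
{
    static void Main()
    {
        string connStr = "Provider=IBMDA400;Data Source=AS400HOST;" +
                         "User Id=USER;Password=PASSWORD;Default Collection=MYLIB;";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // Unqualified table names now resolve against MYLIB first
            using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM SOMETABLE", conn))
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // process each row
                }
            }
        }
    }
}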
A: Snippet from some Delphi source code using the Client Access Express Driver. Probably not exactly what you are looking for, but it may help others that stumble upon this post. The DBQ part is the default library, and the System part is the AS400/DB2 host name.
ConnectionString :=
'Driver={Client Access ODBC Driver (32-bit)};' +
'System=' + System + ';' +
'DBQ=' + Lib + ';' +
'TRANSLATE=1;' +
'CMT=0;' +
//'DESC=Client Access Express ODBC data source;' +
'QAQQINILIB=;' +
'PKG=QGPL/DEFAULT(IBM),2,0,1,0,512;' +
'SORTTABLE=;' +
'LANGUAGEID=ENU;' +
'XLATEDLL=;' +
'DFTPKGLIB=QGPL;';
A: Are you using the Catalog Library List parameter for OLE DB? This is what my connection string typically looks like:
<add name="AS400ConnectionString" connectionString="Data Source=DEVL820;Initial Catalog=Q1A_DATABASE_SRVR;Persist Security Info=False;User ID=BLAH;Password=BLAHBLAH;Provider=IBMDASQL.DataSource.1;Catalog Library List="HTSUTST, HTEUSRJ, HTEDTA"" providerName="System.Data.OleDb" />
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What are the perldoc perlxxx options? It appears that using perldoc perl gives the list of, e.g. perlre, perlvar, etc.
Is this the best place to find the list of what's available as an overview or tutorial or reference manual section? Is there another, better list?
A: perldoc perltoc
is a bit more verbose about the various documentation files. If you want a list of core modules, try
perldoc perlmodlib
A: See also best online source to learn perl
Specifically for perldoc, you can also view the content online which might be easier on the eyes: perldoc online
A: I think "perldoc perltoc" is too verbose for just finding the list of "perlxxx" subjects. Instead use "perldoc perl". Or http://perldoc.perl.org/perl.html, which is the online version.
A: A more friendly starting place, with links to just a select few of the other perlxxx pages, is perldoc perlintro.
A: That should be a good start. You can also find the same information online: http://perldoc.perl.org/perl.html.
I'm not sure what you mean by better. There's a ton of good information at http://www.perl.com/ as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to protect Mac OS X software from cracking? We're releasing a Mac version of our Windows application. Under Windows, there are several tools for executable protection, for example Armadillo, ExeCryptor, AsProtect etc, however, none of these has a Mac version. So, my question is:
Are there any executable protection / encryption tools for Mac OS X?
A: It is my personal view, and that of most other OS X developers, that you should make it reasonably hard to steal your software, but there is a point where it's simply not worth the effort. The fact is that there are very few things that can be done to fully protect a piece of software, and the more you do to try to protect it, the harder you make it for a real user to use your software. Real users then hate to use your software because they lost 5 days of productivity when their dongle broke. And fewer people buy it because the other real users have spread the word that the heavy-handed protection scheme isn't worth it.
Will Shipley, a prominent Mac OS X developer has written one of his infamous opinion pieces here: http://wilshipley.com/blog/2005/06/piracy.html.
A: This might be useful: Using OpenSSL for license keys
A: AquaticPrime is an open source licensing framework that's based on asymmetric key encryption and is decently hard to crack.
A: UPX can encrypt/compress Mac OSX executable.
A: I'm the maker of PELock, a software copy protection tool for Windows, and I must say in my entire life I have received maybe 2 requests for a copy protection for MacOS... Once I was looking for some encryption tools for MacOS executables and didn't find anything (except huge licensing solutions that don't protect the executables). Maybe it's a great market niche for new products, but from my perspective it's... well, not worth the effort (I'm a jerk, I know :D). But maybe since x86 is now the default platform, the people who code software protection will take a shot (Rafael [themida], Pavol [svkp], Alexey [asprotect], do you read this? ;)) :)
A: Speaking frankly (re: niko, really), it seems silly to worry too much about copy protection for the Mac platform. There's a mindset involved, and Mac users are, generally speaking, less likely to attempt to download/torrent illegally than PC users. While encryption and keygens are usually considered far enough to go, you could, in an extreme situation, look at PACE's iLok/InterLok copy protection with a USB key. I think that's a stupidly extreme solution though, and it tends to frustrate adopters (see Amarra for details).
the balance between user experience and developer protection is unfortunately not a great one, on mac or pc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: MySql - Create Table If Not Exists Else Truncate? Here is the updated question:
the current query is doing something like:
$sql1 = "TRUNCATE TABLE fubar";
$sql2 = "CREATE TEMPORARY TABLE IF NOT EXISTS fubar SELECT id, name FROM barfu";
The first time the method containing this is run, it generates an error message on the truncate since the table doesn't exist yet.
Is my only option to do the CREATE TABLE, run the TRUNCATE TABLE, and then fill the table? (3 separate queries)
original question was:
I've been having a hard time trying to figure out if the following is possible in MySql without having to write block sql:
CREATE TABLE fubar IF NOT EXISTS ELSE TRUNCATE TABLE fubar
If I run truncate separately before the create table, and the table doesn't exist, then I get an error message. I'm trying to eliminate that error message without having to add any more queries.
This code will be executed using PHP.
A: You could do the truncate after the 'create if not exists'.
That way it will always exist... and always be empty at that point.
CREATE TABLE IF NOT EXISTS fubar (/* column definitions */);
TRUNCATE TABLE fubar;
A: execute any query if table exists.
Usage: call Edit_table(database-name,table-name,query-string);
*
*Procedure will check for existence of table-name under database-name and will execute query-string if it exists.
Following is the stored procedure:
DELIMITER $$
DROP PROCEDURE IF EXISTS `Edit_table` $$
CREATE PROCEDURE `Edit_table` (in_db_nm varchar(20), in_tbl_nm varchar(20), in_your_query varchar(200))
DETERMINISTIC
BEGIN
DECLARE var_table_count INT;
select count(*) INTO @var_table_count from information_schema.TABLES where TABLE_NAME=in_tbl_nm and TABLE_SCHEMA=in_db_nm;
IF (@var_table_count > 0) THEN
SET @in_your_query = in_your_query;
#SELECT @in_your_query;
PREPARE my_query FROM @in_your_query;
EXECUTE my_query;
ELSE
select "Table Not Found";
END IF;
END $$
DELIMITER ;
More on Mysql
A: shmuel613, it would be better to update your original question rather than replying. It's best if there's a single place containing the complete question rather than having it spread out in a discussion.
Ben's answer is reasonable, except he seems to have a 'not' where he doesn't want one. Dropping the table only if it doesn't exist isn't quite right.
You will indeed need multiple statements. Either conditionally create then populate:
*
*CREATE TEMPORARY TABLE IF NOT EXISTS fubar ( id int, name varchar(80) )
*TRUNCATE TABLE fubar
*INSERT INTO fubar SELECT * FROM barfu
or just drop and recreate
*
*DROP TABLE IF EXISTS fubar
*CREATE TEMPORARY TABLE fubar SELECT id, name FROM barfu
With pure SQL those are your two real classes of solutions. I like the second better.
(With a stored procedure you could reduce it to a single statement. Something like: TruncateAndPopulate(fubar) But by the time you write the code for TruncateAndPopulate() you'll spend more time than just using the SQL above.)
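For what it's worth, a minimal sketch of such a TruncateAndPopulate procedure (table and column names follow the example above):
DELIMITER $$
CREATE PROCEDURE TruncateAndPopulate()
BEGIN
  -- create the table once if needed, empty it, then refill it from barfu
  CREATE TABLE IF NOT EXISTS fubar (id INT, name VARCHAR(80));
  TRUNCATE TABLE fubar;
  INSERT INTO fubar (id, name) SELECT id, name FROM barfu;
END $$
DELIMITER ;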
A: how about:
DROP TABLE IF EXISTS fubar;
CREATE TABLE fubar (/* column definitions */);
Or did you mean you just want to do it with a single query?
A: OK then, not bad. To be more specific, the current query is doing something like:
$sql1 = "TRUNCATE TABLE fubar";
$sql2 = "CREATE TEMPORARY TABLE IF NOT EXISTS fubar SELECT id, name FROM barfu";
The first time the method containing this is run, it generates an error message on the truncate since the table doesn't exist yet.
Is my only option to do the "CREATE TABLE", run the "TRUNCATE TABLE", and then fill the table? (3 separate queries)
PS - thanks for responding so quickly!
A: If you're using PHP, use mysql_list_tables to check that the table exists before TRUNCATE it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Is there a macro to conditionally copy rows to another worksheet? Is there a macro or a way to conditionally copy rows from one worksheet to another in Excel 2003?
I'm pulling a list of data from SharePoint via a web query into a blank worksheet in Excel, and then I want to copy the rows for a particular month to a particular worksheet (for example, all July data from a SharePoint worksheet to the Jul worksheet, all June data from a SharePoint worksheet to Jun worksheet, etc.).
Sample data
Date - Project - ID - Engineer
8/2/08 - XYZ - T0908-5555 - JS
9/4/08 - ABC - T0908-6666 - DF
9/5/08 - ZZZ - T0908-7777 - TS
It's not a one-off exercise. I'm trying to put together a dashboard that my boss can pull the latest data from SharePoint and see the monthly results, so it needs to be able to do it all the time and organize it cleanly.
A: This works: The way it's set up I called it from the immediate pane, but you can easily create a sub() that will call MoveData once for each month, then just invoke the sub.
You may want to add logic to sort your monthly data after it's all been copied
Public Sub MoveData(MonthNumber As Integer, SheetName As String)
Dim sharePoint As Worksheet
Dim Month As Worksheet
Dim spRange As Range
Dim cell As Range
Set sharePoint = Sheets("Sharepoint")
Set Month = Sheets(SheetName)
Set spRange = sharePoint.Range("A2")
Set spRange = sharePoint.Range("A2:" & spRange.End(xlDown).Address)
For Each cell In spRange
If Format(cell.Value, "MM") = MonthNumber Then
copyRowTo sharePoint.Range(cell.Row & ":" & cell.Row), Month
End If
Next cell
End Sub
Sub copyRowTo(rng As Range, ws As Worksheet)
Dim newRange As Range
Set newRange = ws.Range("A1")
If newRange.Offset(1).Value <> "" Then
Set newRange = newRange.End(xlDown).Offset(1)
Else
Set newRange = newRange.Offset(1)
End If
rng.Copy
newRange.PasteSpecial (xlPasteAll)
End Sub
A: Here's another solution that uses some of VBA's built in date functions and stores all the date data in an array for comparison, which may give better performance if you get a lot of data:
Public Sub MoveData(MonthNum As Integer, FromSheet As Worksheet, ToSheet As Worksheet)
Const DateCol = "A" 'column where dates are store
Const DestCol = "A" 'destination column where dates are stored. We use this column to find the last populated row in ToSheet
Const FirstRow = 2 'first row where date data is stored
'Copy range of values to Dates array
Dim Dates As Variant
Dates = FromSheet.Range(DateCol & CStr(FirstRow) & ":" & DateCol & CStr(FromSheet.Range(DateCol & CStr(FromSheet.Rows.Count)).End(xlUp).Row)).Value
Dim i As Integer
For i = LBound(Dates) To UBound(Dates)
If IsDate(Dates(i, 1)) Then
If Month(CDate(Dates(i, 1))) = MonthNum Then
Dim CurrRow As Long
'get the current row number in the worksheet
CurrRow = FirstRow + i - 1
Dim DestRow As Long
'get the destination row
DestRow = ToSheet.Range(DestCol & CStr(ToSheet.Rows.Count)).End(xlUp).Row + 1
'copy row CurrRow in FromSheet to row DestRow in ToSheet
FromSheet.Range(CStr(CurrRow) & ":" & CStr(CurrRow)).Copy ToSheet.Range(DestCol & CStr(DestRow))
End If
End If
Next i
End Sub
A: You will want something like this (assuming DateColumnOrdinal is the number of your date column and DestinationSheet is a Worksheet reference):
Dim lastRow As Long, n As Long, destRow As Long
lastRow = ActiveSheet.UsedRange.Rows.Count
destRow = 1
For n = 1 To lastRow
If IsDate(ActiveSheet.Cells(n, DateColumnOrdinal).Value) Then
If ActiveSheet.Cells(n, DateColumnOrdinal).Value >= DateSerial(2008, 8, 1) And _
ActiveSheet.Cells(n, DateColumnOrdinal).Value <= DateSerial(2008, 8, 31) Then
ActiveSheet.Rows(n).Copy Destination:=DestinationSheet.Rows(destRow)
destRow = destRow + 1
End If
End If
Next n
A: The way I would do this manually is:
*
*Use Data - AutoFilter
*Apply a custom filter based on a date range
*Copy the filtered data to the relevant month sheet
*Repeat for every month
Listed below is code to do this process via VBA.
It has the advantage of handling monthly sections of data rather than individual rows. Which can result in quicker processing for larger sets of data.
Sub SeperateData()
Dim vMonthText As Variant
Dim ExcelLastCell As Range
Dim intMonth As Integer
vMonthText = Array("January", "February", "March", "April", "May", _
"June", "July", "August", "September", "October", "November", "December")
ThisWorkbook.Worksheets("Sharepoint").Select
Range("A1").Select
RowCount = ThisWorkbook.Worksheets("Sharepoint").UsedRange.Rows.Count
'Forces excel to determine the last cell, Usually only done on save
Set ExcelLastCell = ThisWorkbook.Worksheets("Sharepoint"). _
Cells.SpecialCells(xlLastCell)
'Determines the last cell with data in it
Selection.EntireColumn.Insert
Range("A1").FormulaR1C1 = "Month No."
Range("A2").FormulaR1C1 = "=MONTH(RC[1])"
Range("A2").Select
Selection.Copy
Range("A3:A" & ExcelLastCell.Row).Select
ActiveSheet.Paste
Application.CutCopyMode = False
Calculate
'Insert a helper column to determine the month number for the date
'Filter the data to a particular month
'Convert the month number to text
'Copy the filtered data to the month sheet
'Delete the helper column
'Repeat for each month
For intMonth = 1 To 12
Range("A1").CurrentRegion.Select
Selection.AutoFilter Field:=1, Criteria1:="" & intMonth
Selection.Copy
ThisWorkbook.Worksheets("" & vMonthText(intMonth - 1)).Select
Range("A1").Select
ActiveSheet.Paste
Columns("A:A").Delete Shift:=xlToLeft
Cells.Select
Cells.EntireColumn.AutoFit
Range("A1").Select
ThisWorkbook.Worksheets("Sharepoint").Select
Range("A1").Select
Application.CutCopyMode = False
Next intMonth
Selection.AutoFilter
Columns("A:A").Delete Shift:=xlToLeft
'Get rid of the auto-filter and delete the helper column
End Sub
A: If this is just a one-off exercise, as an easier alternative, you could apply filters to your source data, and then copy and paste the filtered rows into your new worksheet?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What can prevent an MS Access 2000 form from closing? My Access 2000 DB causes me problems - sometimes (haven't pinpointed the cause) the "book" form won't close. Clicking its close button does nothing, File -> Close does nothing, even closing Access results in no action. I don't have an OnClose handler for this form. The only workaround I can find involves opening the Vba editor, making a change to the code for that form (even adding a space and then immediately deleting the space), and then going back to close the "book" form, closing it, and saying "no, I don't want to save the changes". Only then will it close. Any help?
A: Here's a forum post describing, I think, the same problem you face. Excerpt belows states a workaround.
What I do is to put code on the close button that reassigns the sourceobject
of any subforms to a blank form, such as:
me!subParts.sourceobject = "subBlank" 'subBlank is my form that is
totally blank, free of code and controls, etc.
docmd.close acForm, "fParts", acSaveNo
The above 2 lines is the only way I've found to prevent the Access prompt
from popping up.
http://bytes.com/forum/thread681889.html
A: Another alternative is
(Me.Checkbox)
or my preferred syntax:
(Me!Checkbox)
It seems to me that there is much confusion in the posts in this topic. The answer that was chosen by the original poster cites an article where the user had a prompt to save design changes to the form, but the problem described here seems like it's a failure of the form to close, not a save issue (the save issue came up only in the workaround describing going to the VBE and making a code change).
I wonder if the original user might have incorrect VBE options set? If you open the VBE and go to TOOLS | OPTIONS, on the GENERAL tab, you'll see several choices about error handling. BREAK ON UNHANDLED ERRORS or BREAK IN CLASS MODULE should be chosen, but it's important to recognize that if you use the former, you may not see certain kinds of errors.
There's not really enough detail to diagnose much more, other than the fact that the reference to the checkbox control seemed to have been causing the problem, but there are a number of Access coding best practices that can help you avoid some of these oddities. The code-related recommendations in Tony Toews's Best Practices page are a good place to start.
A: That sure is weird. Do you have any timer controls on the form? If you do, try disabling it in the OnClose.
A: There is a possibility that the message box that asks if you want to save changes is being displayed behind the form. I believe that this message box is modal so you must click yes or no before you can do anything with the form which is why you can't close it.
A: Does your form have an unload event? That can be canceled, and if it is, the form won't close when it's in form view. It will only close in design view, which, when you edit the vba code is what the form does in the Access window when you're editing the code.
A: Does your form have a checkbox, toggle button or option button? There's a bug in Access 2000 where Access won't close if you test the value without explicitly using the Value property in the vba code, like this:
If Me.chkbox Then
versus:
If Me.chkbox.Value Then
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to implement in-process full text search engine In one of our commercial applications (Win32, written in Delphi) we'd like to implement full text search. The application stores user data in some kind of binary format that is not directly recognizable as text.
Ideally, I'd like to find either an in-process solution (DLL would be OK) or a local server that I could access via TCP (preferably). The API should allow me to submit a textual information to the server (along with the metadata representing the binary blob it came from) and, of course, it should allow me to do a full-text search with at least minimal support for logical operators and substring searching. Unicode support is required.
I found extensive list of search engines on Stack Overflow (What are some Search Servers out there?) but I don't really understand which of those engines could satisfy my needs. I thought of asking The Collective for opinion before I spend a day or two testing each of them.
Any suggestions?
A: There are a number of options on the market, either fully fledged commercial products or open source variants. Your choice of a search provider is very dependent on the customers you are targeting.
Microsoft has a free Express version of their Search Server. As far as I know the Express edition is limited to running the Application Tier on one server.
There is also the Apache Lucene project which is open source. It has a nice API that's easy to use and a large community of users. The original project is based on Java, but there are also other implementations such as NLucene for .NET that I have used personally.
A: I'd recommend having a look at SQLite -- full-text search is included in the latest version.
A: I suppose the answer depends on your db. For example SQL Server has full text search and also English Language Queries if ever needed.
A: Take a look at using PostgreSQL and tsearch.
A: Try using postgresql with tsearch
A: Sphinx is probably the most efficient and scalable option while SQLite - FTS3 is the most straightforward option.
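For a sense of how straightforward FTS3 is, here is a minimal SQL sketch (the table and column names are invented):
-- create a full-text index over the extracted text plus blob metadata
CREATE VIRTUAL TABLE documents USING fts3(blob_id, content);
-- index the text extracted from one binary blob
INSERT INTO documents (blob_id, content) VALUES ('blob-42', 'text extracted from the blob');
-- terms in MATCH are implicitly ANDed; * gives prefix matching, and OR is supported too
SELECT blob_id FROM documents WHERE content MATCH 'invoice pay*';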
A: While not in-process, Solr is very fast (based on Lucene) and easily accessible from any platform (HTTP)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why learn Perl, Python, Ruby if the company is using C++, C# or Java as the application language? I wonder why a C++, C# or Java developer would want to learn a dynamic language?
Assuming the company won't switch its main development language from C++/C#/Java to a dynamic one what use is there for a dynamic language?
What helper tasks can be done by the dynamic languages faster or better after only a few days of learning than with the static language that you have been using for several years?
Update
After seeing the first few responses it is clear that there are two issues.
My main interest would be something that is justifiable to the employer as an expense.
That is, I am looking for justifications for the employer to finance the learning of a dynamic language. Aside from the obvious point that the employee will have a broader view, employers are usually looking for some "real" benefit.
A: I primarily program in Java and C# but use dynamic languages (ruby/perl) to support smoother deployment, kicking off OS tasks, automated reporting, some log parsing, etc.
After a short time learning and experimenting with ruby or perl you should be able to write some regex manipulating scripts that can alter data formats or grab information from logs. An example of a small ruby/perl script that could be written quickly would be a script to parse a very large log file and report out only a few events of interest in either a human readable format or a csv format.
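As a rough sketch of what such a script might look like in Ruby (the file name, log format, and pattern here are invented):
# stream a large log line by line, emitting timestamp,level,message as CSV
pattern = /\A(\S+ \S+) \[(ERROR|FATAL)\] (.*)\z/
File.foreach("app.log") do |line|
  if (m = pattern.match(line.chomp))
    puts [m[1], m[2], m[3]].join(",")
  end
end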
Also, having experience with a variety of different programming languages should help you think of new ways to tackle problems in more structured languages like Java, C++, and C#.
A: A lot of times some quick task comes up that isn't part of the main software you are developing. Sometimes the task is a one-off, i.e. compare this file to the database and let me know the differences. It is a lot easier to do text parsing in Perl/Ruby/Python than it is in Java or C# (partially because it is a lot easier to use regular expressions). It will probably take a lot less time to parse the text file using Perl/Ruby/Python (or maybe even VBScript, cringe) and then load it into the database than it would to create a Java/C# program to do it, or to do it by hand.
Also, due to the ease at which most of the dynamic languages parse text, they are great for code generation. Sure your final project must be in C#/Java/Transact SQL but instead of cutting and pasting 100 times, finding errors, and cutting and pasting another 100 times it is often (but not always) easier just to use a code generator.
A recent example at work is we needed to get data from one accounting system into our accounting system. The system has an import format, but the old system had a completely different format (fixed width although some things had to be matched). The task is not to create a program to migrate the data over and over again. It is to shove the data into our system and then maintain it there going forward. So even though we are a C# and SQL Server shop, I used Python to convert the data into the format that could be imported by our application. Ultimately it doesn't matter that I used python, it matters that the data is in the system. My boss was pretty impressed.
Where I often see dynamic languages used is testing. It is much easier to create a Python/Perl/Ruby program to link to a web service and throw some data against it than it is to create the equivalent Java program. You can also use Python to hit against command line programs, generate a ton of garbage (but still valid) test data, etc., quite easily.
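For instance, a minimal Python sketch that throws some JSON at a hypothetical HTTP endpoint and prints the response (the URL and payload are made up; substitute your own service):
import json
import urllib.request

payload = json.dumps({"id": 1, "name": "test"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8080/api/echo",  # hypothetical service URL
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))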
The other thing that dynamic languages are big on is code generation. Creating the C#/C++/Java code. Some examples follow:
The first code generation task I often see is people using dynamic languages to maintain constants in the system. Instead of hand coding a bunch of enums, a dynamic language can be used to fairly easily parse a text file and create the Java/C# code with the enums.
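A sketch of that idea in Python (the input file format and all names are invented):
# generate a C# enum from a "Name=Value" text file
print("public enum StatusCode")
print("{")
with open("status_codes.txt") as f:
    for raw in f:
        raw = raw.strip()
        if raw and not raw.startswith("#"):
            name, value = raw.split("=", 1)
            print(f"    {name.strip()} = {value.strip()},")
print("}")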
SQL is a whole other ball game, but often you get better performance by cutting and pasting 100 times instead of trying to write a function (due to caching of execution plans, or because putting complicated logic in a function causes you to go row by row instead of operating on a set). In fact it is quite useful to use the table definition to create certain stored procedures automatically.
It is always better to get buy-in for a code generator. But even if you don't, is it more fun to spend time cutting/pasting, or is it more fun to create a Perl/Python/Ruby script once and then have that generate the code? If it takes you hours to hand code something but less time to create a code generator, then even if you use it once you have saved time and hence money. If it takes you longer to create a code generator than it takes to hand code once, but you know you will have to update the code more than once, it may still make sense. If it takes you 2 hours to hand code and 4 hours to do the generator, but you know you'll have to hand code equivalent work another 5 or 6 times, then it is obviously better to create the generator.
Also some things are easier with dynamic languages than Java/C#/C/C++. In particular regular expressions come to mind. If you start using regular expressions in Perl and realize their value, you may suddenly start making use of the Java regular expression library if you haven't before. If you have then there may be something else.
I will leave you with one last example of a task that would have been great for a dynamic language. My work mate had to take a directory full of files and burn them to various cd's for various customers. There were a few customers but a lot of files and you had to look in them to see what they were. He did this task by hand....A Java/C# program would have saved time, but for one time and with all the development overhead it isn't worth it. However slapping something together in Perl/Python/Ruby probably would have been worth it. He spent several hours doing it. It would have taken less than one to create the Python script to inspect each file, match which customer it goes to, and then move the file to the appropriate place.....Again, not part of the standard job. But the task came up as a one off. Is it better to do it yourself, spend the larger amount of time to make Java/C# do the task, or spend a much smaller amount of time doing it in Python/Perl/Ruby. If you are using C or C++ the point is even more dramatic due to the extra concerns of programming in C or C++ (pointers, no array bounds checking, etc.).
A: One big reason to learn Perl or Ruby is to help you automate any complicated tasks that you have to do over and over.
Or if you have to analyse contents of log files and you need more mungeing than available using grep, sed, etc.
Also using other languages, e.g. Ruby, that don't have much "setup cost" will let you quickly prototype ideas before implementing them in C++, Java, etc.
HTH
cheers,
Rob
A: Do you expect to work for this company forever? If you're ever out on the job market, perhaps some prospective employers will be aware of the Python paradox.
A: A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.
- Wayne Gretzky
Our industry is always changing. No language can be mainstream forever. To me Java, C++, .Net is where the puck is right now. And python, ruby, perl is where the puck is going to be. Decide for yourself if you wanna be good or great!
A: Paul Graham posted an article several years ago about why Python programmers made better Java programmers. (http://www.paulgraham.com/pypar.html)
Basically, regardless of whether the new language is relevant to the company's current methodology, learning a new language means learning new ideas. Someone who is willing to learn a language that isn't considered "business class" means that he is interested in programming, beyond just earning a paycheck.
To quote Paul's site:
And people don't learn Python because it will get them a job; they learn it because they genuinely like to program and aren't satisfied with the languages they already know.
Which makes them exactly the kind of programmers companies should want to hire. Hence what, for lack of a better name, I'll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they'll be able to hire better programmers, because they'll attract only those who cared enough to learn it. And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don't learn merely to get a job.
If an employer was willing to pay for the cost of learning a new language, chances are the people who volunteered to learn (assuming it wasn't a mandatory class) would be the same people to are already on the "fast track".
A: When I first learned Python, I worked for a Java shop. Occasionally I'd have to do serious text-processing tasks which were much easier to do with quick Python scripts than Java programs. For example, if I had to parse a complex CSV file and figure out which of its rows corresponded to rows in our Oracle database, this was much easier to do with Python than Java.
More than that, I found that learning Python made me a much better Java programmer; having learned many of the same concepts in another language I feel that I understand those concepts much better. And as for what makes Python easier than Java, you might check out this question: Java -> Python?
A: Edit: I wrote this before reading the update to the original question. See my other answer for a better answer to the updated question. I will leave this as is as a warning against being the fastest gun in the west =)
Over a decade ago, when I was learning the ways of the Computer, the Old Wise Men With Beards explained how C and C++ are the tools of the industry. No one used Pascal and only the foolhardy would risk their companies with assembler.
And of course, no one would even mention the awful slow ugly thing called Java. It will not be a tool for serious business.
So. Um. Replace the languages in the above story and perhaps you can predict the future. Perhaps you can't. Point is, Java will not be the Last Programming Language ever and also you will most likely switch employers as well. The future is charging at you 24 hours per day. Be prepared.
Learning new languages is good for you. Also, in some cases it can give you bragging rights for a long time. My first university course was in Scheme. So when people talk to me about the new language du jour, my response is something like "First-class functions? That's so last century."
And of course, you get more stuff done with a high-level language.
A: Let me turn your question on its head by asking what use it is to an American English speaker to learn another language?
The languages we speak (and those we program in) inform the way we think. This can happen on a fundamental level, such as C++ versus JavaScript versus Lisp, or on an implementation level, in which a Ruby construct provides a eureka moment for a solution in your "real job."
Speaking of your real job, if the market goes south and your employer decides to "right size" you, how do you think you'll stack up against a guy who is flexible because he's written software in tens of languages, instead of your limited exposure? All things being equal, I think the answer is clear.
Finally, you program for a living because you love programming... right?
A: Learning a new language is a long-term process. In a couple of days you'll learn the basics, yes. But! As you probably know, the real practical applicability of any language is tied to the standard library and other available components. Learning how to use them efficiently requires a lot of hands-on experience.
Perhaps the only immediate short-term benefit is that developers learn to distinguish the nails that need a Python/Perl/Ruby hammer. And, if they are any good, they can then study some more (online, perhaps!) and become real experts.
The long-term benefits are easier to imagine:
*
*The employee becomes a better developer. Better developer => better quality. We are living in a knowledge economy these days. It's wiser to invest in those brains that already work for you.
*It is easier to adapt when the next big language emerges. It is very likely that the NBL will have many of the features present in today's scripting languages: first-class functions, closures, streams/generators, etc.
*New market possibilities and ability to respond more quickly. Even if you are not writing Python, other people are. Your clients? Another vendor in the project? Perhaps a critical component was written in some other language? It will cost money and time, if you do not have people who can understand the code and interface with it.
*Recruitment. If your company has a reputation of teaching new and interesting stuff to people, it will be easier to recruit the top people. Everyone is doing Java/C#/C++. It is not a very effective way to differentiate yourself in the job market.
A: Towards answering the updated question, it's a chicken/egg problem. The best way to justify an expense is to show how it reduces a cost somewhere else, so you may need to spend some extra/personal time to learn something first to build some kind of functional prototype.
Show your boss a demo like "hey, i did this thing, and it saves me this much time [or better yet, this much $$], imagine if everyone could use this how much money we would save"
and then after they agree, explain how it is some other technology and that it is worth the expense to get more training, and training for others on how to do it better.
A: I don't think anyone has mentioned this yet. Learning a new language can be fun! Surely that's a good enough reason to try something new.
A: I have often found that learning another language, especially a dynamically typed language, can teach you things about other languages and make you an overall better programmer. Learning Ruby, for example, will teach you object-oriented programming in ways Java won't, and vice versa. All in all, I believe that it is better to be a well-rounded programmer than stuck in a single language. It makes you more valuable to the companies/clients you work for.
A: check out the answers to this thead:
https://stackoverflow.com/questions/76364/what-is-the-single-most-effective-thing-you-did-to-improve-your-programming-ski#84112
Learning new languages is about keeping an open mind and learning new ways of doing things.
A: I'm not sure if this is what you are looking for, but we write our main application with Java at the small company I work for, and have used Python to write smaller scripts quickly. Backup software, temporary scripts to manipulate data and push out results. It just seems easier sometimes to sit down with Python and write a quick script than mess with classes and stuff in Java.
Temp scripts that aren't going to stick around don't need a lot of design time wasted on them.
And I am lazy, but it is good to just learn as much as you can of course and see what features exist in other languages. Knowing more never hurts you in future career changes :)
A: It's all about broadening your horizons as a developer. If you limit yourself to only strongly typed languages, you may not end up the best programmer you could be.
As for tasks, Python/Lua/Ruby/Perl are great for small simple tasks, like finding some files and renaming them. They also work great when paired with a framework (e.g. Rails, Django, Lua for Windows) for developing simple apps quickly. Hell, 37Signals is based on creating simple yet very useful apps in Ruby on Rails.
A: They're useful for the "quick hack", that is, for plugging a gap with a quick (and potentially dirty) fix faster than it would take to develop the same thing in your main language. An example: a simple script in Perl to go through a large text file and replace all instances of an email address with another is trivial, with the time taken in the 10 minute range. Hacking a console app together to do the same in your main language would take multiples of that.
You also have the benefit that exposing yourself to additional languages broadens your abilities, and learning to attack problems from a different language's perspective can be as valuable as the language itself.
Finally, scripting languages are very useful in the realm of extension. Take Lua as an example. You can bolt a Lua interpreter into your app with very little overhead, and you now have a way to create rich scripting functionality that can be exposed to end users or altered and distributed quickly without requiring a rebuild of the entire app. This is used to great effect in many games, most notably World of Warcraft.
A: Personally I work on a Java app, but I couldn't get by without perl for some supporting scripts.
I've got scripts to quickly flip what db I'm pointing at, scripts to run build scripts, scripts to scrape data & compare stuff.
Sure I could do all that with java, or maybe shell scripts (I've got some of those too), but who wants to compile a class (making sure the classpath is set right etc) when you just need something quick and dirty. Knowing a scripting language can remove 90% of those boring/repetitive manual tasks.
A: Learning something with a flexible OOP system, like Lisp or Perl (see Moose), will allow you to better expand and understand your thoughts on software engineering. Ideally, every language has some unique facet (whether it be CLOS or some other technique) that enhances, extends and grows your abilities as a programmer.
A: If all you have is a hammer, every problem begins to look like a nail.
There are times when having a screwdriver or pair of pliers makes a complicated problem trivial.
Nobody asks contractors, carpenters, etc, "Why learn to use a screwdriver if i already have a hammer?". Really good contractors/carpenters have tons of tools and know how to use them well. All programmers should be doing the same thing, learning to use new tools and use them well.
But before we use any power tools, let's take a moment to talk about shop safety. Be sure to read, understand, and follow all the safety rules that come with your power tools. Doing so will greatly reduce the risk of personal injury. And remember this: there is no more important rule than to wear these: safety glasses.
-- Norm
A: I think the main benefits of dynamic languages can be boiled down to
*
*Rapid development
*Glue
The short design-code-test cycle time makes dynamic languages ideal for prototyping, tools, and quick & dirty one-off scripts. IMHO, the latter two can make a huge impact on a programmer's productivity. It amazes me how many people trudge through things manually instead of whipping up a tool to do it for them. I think it's because they don't have something like Perl in their toolbox.
The ability to interface with just about anything (other programs or languages, databases, etc.) makes it easy to reuse existing work and automate tasks that would otherwise need to be done manually.
A: Given the increasing focus on running dynamic languages (the Da Vinci VM etc.) on the JVM and the increasing number of dynamic languages that do run on it (JRuby, Groovy, Jython), I think the use cases are just increasing. Some of the scenarios I found really benefited are:
*
*Prototyping - use RoR or Grails to build quick prototypes, with the advantage of being able to run them on the standard app server and (maybe) reuse existing services etc.
*Testing - write unit tests much, much faster in dynamic languages
*Performance/automation test scripting - some of these tools are starting to allow the use of a standard dynamic language of your choice to write the test scripts instead of proprietary scripting languages. A side benefit might be being able to reuse some unit test code you've already written.
A: Don't tell your employer that you want to learn Ruby. Tell him you want to learn about the state of the art in web framework technologies. It just happens that the hottest ones are Django and Ruby on Rails.
A: I have found the more that I play with Ruby, the better I understand C#.
1) As you switch between these languages, you see that each of them has its own constructs and philosophies behind the problems it tries to solve. This will help you when finding the right tool for the job or the domain of a problem.
2) The role of the compiler (or interpreter for some languages) becomes more prominent. Why does Ruby's type system differ from the .NET/C# system? What problems do each of these solve? You'll find yourself understanding at a lower level the constructs of the compiler and its influence on the language.
3) Switching between Ruby and C# really helped me to understand Design Patterns better. I really suggest implementing common design patterns in a language like C# and then in a language like Ruby. It often helped me see through some of the compiler ceremony to the philosophy of a particular pattern.
4) A different community. C#, Java, Ruby, Python, etc all have different communities that can help engage your abilities. It is a great way to take your craft to the next level.
5) Last, but not least, because new languages are fun :)
A: Philosophical issues aside, I know that I have gotten value from writing quick-and-dirty Ruby scripts to solve brute-force problems that Java was just too big for. Last year I had three separate directory structures that were all more-or-less the same, but with lots of differences among the files (the client hadn't heard of version control and I'll leave the rest to your imagination).
It would have taken a great deal of overhead to write an analyzer in Java, but in Ruby I had one working in about 40 minutes.
A: Often, dynamic languages (especially Python and Lua) are embedded in programs to add a more plugin-like functionality, and because they are high-level languages that make it easy to add certain behavior where a low/mid-level language is not needed.
Lua specifically lacks all the low-level system calls because it was designed for ease of use when adding functionality within the program, not as a general programming language.
A: You should also consider learning a functional programming language like Scala. It has many of the advantages of Ruby, including a concise syntax and powerful features like closures. But it compiles to Java class files and integrates seamlessly into a Java stack, which may make it much easier for your employer to swallow.
Scala isn't dynamically typed, but its "implicit conversion" feature gives many, perhaps even all of the benefits of dynamic typing, while retaining many of the advantages of static typing.
A: Dynamic languages are fantastic for prototyping ideas. Often for performance reasons they won't work for permanent solutions or products. But, with languages like Python, which allow you to embed standard C/C++/Java inside them or vice versa, you can speed up the really critical bits but leave them glued together with the flexibility of a dynamic language.
...and so you get the best of both worlds. If you need to justify this in terms of why more people should learn these languages, just point out how much faster you can develop the same software and how much more robust the solution is (because debugging/fixing problems in dynamic languages is, in my experience, considerably easier!).
A: Knowing grep and ruby made it possible to narrow down a problem, and verify the fix for, an issue involving tons of java exceptions on some production servers. Because I threw the solution together in ruby, it was done (designed, implemented, tested, run, bug-fixed, re-run, enhanced, results analyzed) in an afternoon instead of a couple of days. I could have solved the same problem using an all-java solution or a C# solution, but it most likely would have taken me longer.
Having dynamic language expertise also sometimes leads you to simpler solutions in less dynamic languages. In ruby, perl or python, you just intuitively reach for associative arrays (hashes, dictionaries, whatever word you want to use) for the smallest things, where you might be tempted to create a complex class hierarchy in a statically typed language when the problem doesn't necessarily demand it.
Plus you can plug most scripting languages into most runtimes. So it doesn't have to be either/or.
A: The "real benefit" that an employer could see is a better programmer who can implement solutions faster; however, you will not be able to provide any hard numbers to justify the expense and an employer will most likely have you work on what makes money now as opposed to having you work on things that make the future better.
The only time you can get training on the employer's dime, is when they perceive a need for it and it's cheaper than hiring a new person who already has that skill-set.
A: Testing.
It's often quicker and easier to test your C#/Java application by using a dynamic language. You can do exploratory testing at the interactive prompt and quickly create automated test scripts.
A: Others have already explained why learning more languages makes you a better programmer.
As for convincing your boss it's worth it, this is probably just your company's culture. Some places make career and skill progress a policy (move up or out), some places value it but leave it up to the employee's initiative, and some places are very focused on the bottom line.
If you have to explain why learning a language is a good thing to your boss, my advice would be to stay at work only as long as necessary, then go home and study new things on your own.
A: For after-work projects, for freelance jobs... :) And finally, to be as programming-literate as possible... ;)
A: Dynamic languages are a different way to think, and sometimes the practices you learn from a dynamic or functional language can transfer to the more statically typed languages. But if you never take the time to learn different languages, you'll never get the benefit of having a new way to think when you are coding.
A: Don't bother your employer, spend ~$40 on a book, download some software, and devote some time each day to read/do exercises. In no time you'll be trained :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
}
|
Q: How do I prepend a directory the library path when loading a core file in gdb on Linux I have a core file generated on a remote system that I don't have direct access to. I also have local copies of the library files from the remote system, and the executable file for the crashing program.
I'd like to analyse this core dump in gdb.
For example:
gdb path/to/executable path/to/corefile
My libraries are in the current directory.
In the past I've seen debuggers implement this by supplying the option "-p ." or "-p /=."; so my question is:
How can I specify that libraries be loaded first from paths relative to my current directory when analysing a corefile in gdb?
A: I'm not sure this is possible at all within gdb but then I'm no expert.
However I can comment on the Linux dynamic linker. The following should print the path of all resolved shared libraries and the unresolved ones.
ldd path/to/executable
We need to know how your shared libraries were linked with your executable. To do this, use the following command:
readelf -d path/to/executable | grep RPATH
*
*Should the command print nothing, the dynamic linker will use standard locations plus the LD_LIBRARY_PATH environment variable to find the shared libraries.
*If the command prints some lines, the dynamic linker will ignore LD_LIBRARY_PATH and use the hardcoded rpaths instead.
If the listed rpaths are absolute, the only solution I know is to copy (or symlink) your libraries to the listed locations.
If the listed rpaths are relative, they will contain a $ORIGIN which will be replaced at run time by the path of the executable. Move either the executable or the libraries to match.
For further informations, you could start with:
man ld.so
A: Start gdb without specifying the executable or core file, then type the following commands:
set solib-absolute-prefix ./usr
file path/to/executable
core-file path/to/corefile
You will need to make sure to mirror your library path exactly from the target system. The above is meant for debugging targets that don't match your host, which is why it's important to replicate your root filesystem structure containing your libraries.
If you are remote debugging a server that is the same architecture and Linux/glibc version as your host, then you can do as fd suggested:
set solib-search-path <path>
If you are trying to override some of the libraries, but not all then you can copy the target library directory structure into a temporary place and use the solib-absolute-prefix solution described above.
A: I found this excerpt on developer.apple.com
set solib-search-path path
If this variable is set, path is a colon-separated list of directories to search for shared libraries. `solib-search-path' is used after `solib-absolute-prefix' fails to locate the library, or if the path to the library is relative instead of absolute. If you want to use `solib-search-path' instead of `solib-absolute-prefix', be sure to set `solib-absolute-prefix' to a nonexistent directory to prevent GDB from finding your host's libraries.
EDIT:
I don't think using the above setting prepends the directories I added, but it does seem to append them, so files missing from my current system are picked up in the paths I added. I guess setting the solib-absolute-prefix to something bogus and adding directories in the solib-search-path in the order I need might be a full solution.
A: You can also just set LD_PRELOAD to each of the libraries or LD_LIBRARY_PATH to the current directory when invoking gdb. This will only cause problems if gdb itself tries to use any of the libraries you're preloading.
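A sketch of that invocation (paths are illustrative):
# prefer shared libraries from the current directory
LD_LIBRARY_PATH=. gdb path/to/executable path/to/corefile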
A: One important note:
if you're doing cross-compiling and trying to debug with gdb, then
after you've done file EXECUTABLE_NAME, if you see something like:
Using host libthread_db library "/lib/libthread_db.so.1"
then check whether you have a libthread_db for your target system. I found a lot of similar problems on the web. Such problems cannot be solved just by using "set solib-..."; you have to build libthread_db using your cross-compiler as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: Create an index on a MySQL column based on the length of its contents? How do I create an index on a column in MySQL v5.0 (MyISAM db engine) based upon the length of its value? It's a TEXT data type up to 7000 characters; do I have to add another column with the length of the first column?
A: Yes, as MySQL doesn't support function-based indexes (like ADD INDEX myIndex(LENGTH(text))), you'll need a new int column and define a trigger to auto-update it after inserts and updates.
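A sketch of that approach (the table and column names are invented; MySQL 5.0 supports triggers from 5.0.2):
-- plain integer column holding the length, plus an index on it
ALTER TABLE articles ADD COLUMN body_len INT;
CREATE INDEX idx_body_len ON articles (body_len);

-- keep the column in sync on insert and update
CREATE TRIGGER articles_len_ins BEFORE INSERT ON articles
FOR EACH ROW SET NEW.body_len = LENGTH(NEW.body);

CREATE TRIGGER articles_len_upd BEFORE UPDATE ON articles
FOR EACH ROW SET NEW.body_len = LENGTH(NEW.body);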
A: Sounds like a good approach to me (sorry don't know mysql, but in oracle you could set a trigger so that when your main column is updated the "length" column gets automatically updated)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to choose between `class` and `id` When using divs, when is it best to use a class vs an id?
Is it best to use class on, say, font variants or elements within the HTML, and then use id for the structure/containers?
This is something I've always been a little uncertain on, any help would be great.
A: Use id to identify elements that there will only be a single instance of on a page. For instance, if you have a single navigation bar that you are placing in a specific location, use id="navigation".
Use class to group elements that all behave a certain way. For instance, if you want your company name to appear in bold in body text, you might use <span class='company'>.
A: An additional benefit to using an ID is the ability to target it in an anchor tag:
<h2 id="CurrentSale">Product I'm selling</h2>
Will allow you to in some other place link directly to that spot on the page:
<a href="#CurrentSale">the Current Sale</a>
A common use for this would be to give each headline on a blog a datestamped ID (say id="date20080408") which would allow you to specifically target that section of the blog page.
It is also important to remember that there are more restricted naming rules for ids, primarily that they can't start with a number. See similar SO question: What is a valid value for id attributes in html
A: classes are great when you want to apply similar styles to many different divs or elements. ids are good when you want to address a specific element for formatting or for updating with javascript.
A: The most important thing to understand is that IDs have to be unique: only one element with a given ID should exist within a page. Thus, if you're wanting to refer to a specific page element, that's often the best thing to use.
If you have multiple elements that are in some way similar, you should use the element's class to identify them.
One very useful, surprisingly little known fact is that you can actually apply multiple classes to a single element by putting spaces between the class names. Thus, if you posted a question that was written by the currently logged in user, you might identify it with <div class="question currentAuthor">.
A: The HTML standard itself answers your question:
No two objects may have the same ID, but an arbitrary number of objects may have the same class.
So if you want to apply certain CSS style attributes to a single DIV only, that would be an ID. If you want certain style attributes to apply to multiple DIVs, that must be a class.
Note that you can mix both. You can make two DIVs belong to the same class, but give them different IDs. You can then apply the style common to both to the class, and the things specific to either one to their ID. The browser will first apply the class style and then the ID style, so ID styles can overwrite whatever a class has set before.
A: Think of University.
<student id="JonathanSampson" class="Biology" />
<student id="MarySmith" class="Biology" />
Student ID cards are distinct. No two students on campus will have the same student ID card. However, many students can and will share at least one Class with each other.
It's okay to put multiple students under one Class title, Biology 101. But it's never acceptable to put multiple students under one student ID.
When giving Rules over the school intercom system, you can give Rules to a Class:
"Would the Biology class please wear Red Shirts tomorrow?"
.BiologyClass {
shirt-color:red;
}
Or you can give rules to a Specific Student, by calling his unique ID:
"Would Jonathan Sampson please wear a Green Shirt tomorrow?"
#JonathanSampson {
shirt-color:green;
}
A: Some other things to keep in mind:
*
*When using ASP.Net you don't have complete control over the ID of elements, so you pretty much have to use class (note that this is less true than it once was).
*It's best to use neither, when you can help it. Instead, specify the element type and its parent's type. For example, an unordered list contained inside a div with the NavArea class could be singled out this way:
div.NavArea ul { /* styles go here */ }
Now you can style the logical division for much of your entire navarea with one class application.
A: IDs must be unique but in CSS they also take priority when figuring out which of two conflicting instructions to follow.
<div id="section" class="section">Text</div>
#section {color:#fff}
.section {color:#000}
The text would be white.
A: IDs should be unique. CLASSes should be shared. So, if you have some CSS formatting that will be applied to multiple DIV, use a class. If just one (as a requirement, not as happenstance), use an ID.
A: I think we all know what class is, but if you think of IDs as identifiers rather than styling tools, you won't go far wrong. You need an identifier when trying to target something, and if you have more than one item with the same ID, you can no longer identify it...
When it comes to writing your CSS for IDs and classes, it's beneficial to use minimal CSS classes as far as possible and try not to get too heavy with the IDs until you HAVE to, otherwise you'll constantly be aiming to write stronger declarations and soon have a CSS file full of !important.
A: Where to use an ID versus a class
The simple difference between the two is that while a class can be used repeatedly on a page, an ID must only be used once per page. Therefore, it is appropriate to use an ID on the div element that is marking up the main content on the page, as there will only be one main content section. In contrast, you must use a class to set up alternating row colors on a table, as they are by definition going to be used more than once.
IDs are an incredibly powerful tool. An element with an ID can be the target of a piece of JavaScript that manipulates the element or its contents in some way. The ID attribute can be used as the target of an internal link, replacing anchor tags with name attributes. Finally, if you make your IDs clear and logical, they can serve as a sort of “self documentation” within the document. For example, you do not necessarily need to add a comment before a block stating that a block of code will contain the main content if the opening tag of the block has an ID of, say, "main", "header", "footer", etc.
A: If it's a style you want to use in multiple places on a page, use a class. If you want a lot of customization for a single object, say a nav bar on the side of the page, then an id is best, because you're not likely to need that combination of styles anywhere else.
A: id is supposed to be the element unique identifier on the page, which helps to manipulate it. Any externally CSS defined style that is supposed to be used in more than one element should go on the class attribute
<div class="code-formatting-style-name" id="myfirstDivForCode">
</div>
A: Use an id for a unique element on the page which you want to do something very specific with, and a class for something which you could reuse on other parts of the page.
A: The id attribute is used for elements to uniquely identify them within the document, whereas the class attribute can have the same value shared by many elements in the same document.
So, really, if there's only one element per document that's going to use the style (e.g., #title), then go with id. If multiple elements can use the style, go with class.
A: I would say it is best to use a class whenever you think a style element is going to be repeated on a page. Items like HeaderImage, FooterBar and the like go well as an ID since you'll only be using it once and it'll help prevent you from accidentally duplicating it, since some editors will alert you when you have duplicate IDs.
I can also see it helpful if you're going to generate the div elements dynamically, and want to target a specific item for formatting; you can just give it the proper ID at generation as opposed to searching for the element and then updating its class.
A: Classes are for styles you may use multiple times on a page; IDs are unique identifiers that define special cases for elements. The standards say that IDs should only be used once in a page. So you should use classes when you want to use a style on more than one element in a page, and an ID when you just want to use it once.
Another thing is that for classes you can use multiple values (by putting spaces in between each class, a la class='blah blah22 blah3'), whereas with IDs you can only use one ID per element.
Styles defined for IDs override styles defined for classes, so the style settings of #uniquething will override the style of .whatever if the two conflict.
So you should probably use IDs for things like the header, your 'sidebar' and so on - things that only appear once per page.
A: My flavor of choice is to use class identification when css styles are applied to several divs in the same way. If the structure or container is really only applied once, or can inherit its style from the default, I see no reason to use class identification.
So it would depend on whether your div is one of several; for instance, a div representing a question to an answer on the stackoverflow website. Here you would want to define the style for the "Answer" class and apply this to each "Answer" div.
A: If you are using .Net web controls then it is sometimes a lot easier just to use classes, as .Net alters web control div IDs at run time (where they are using runat="server").
Using classes allows you to easily build up styles with separate classes for fonts, spacing, borders etc. I generally use multiple files to store separate types of formatting information, e.g. basic_styles.css for simple site wide formatting, ie6_styles.css for browser specific styles (in this case IE6) etc and template_1.css for layout information.
Which ever way you choose try to be consistent to aid maintenance.
A: As well, an element you give a specific id to can be manipulated via JavaScript and DOM commands - getElementById, for example.
In general, I find it useful to give all of the main structural divs an id - even if initially, I don't have any specific CSS rules to apply, I have that capability there for the future. On the other hand, I use classes for those elements that could be used multiple times - for example, a div containing an image and caption - I might have several classes, each with slightly different styling values.
Remember though, all things being equal, style rules in an id specification take precedence over those in a class specification.
A: In addition to id's and classes being great for setting style on an individual item verses a similar set of items, there's also another caveat to remember. It also depends on the language to some degree. In ASP.Net, you can throw in Div's and have id's. But, if you use a Panel instead of the Div you will lose the id.
A: For me: I use classes when I will do something in CSS, or to name styles that can be applied to any element on a page.
So I use IDs for unique elements.
Ex :
<div id="post-289" class="box clear black bold radius">
CSS :
.clear {clear:both;}
.black {color:black;}
.bold {font-weight:bold;}
.radius {border-radius:2px;}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
}
|
Q: Using Visual Studio's 'cl' from a normal command line Visual Studio 2003 and 2005 (and perhaps 2008 for all I know) require the command line user to run in the 'Visual Studio Command Prompt'. When starting this command prompt it sets various environment variables that the C++ compiler, cl, uses when compiling.
This is not always desirable. If, for example, I want to run 'cl' from within Ant, I'd like to avoid having to run Ant from within the 'Visual Studio Command Prompt'. Running vcvars32.bat isn't an option as the environment set by vcvars32.bat would be lost by the time cl was run (if running from within Ant).
Is there an easy way to run cl without having to run from within the Visual Studio command prompt?
A: To hard-set global system environment variables, add to Path and create Include and LIB.
Change these to match your versions of MSVC and the Windows SDK, and x86 or x64.
To check which variables you need, you can simply run x64 Native Tools Command Prompt for VS 2019 from the Windows start menu, then type "set path", "set lib", or "set include".
For example, here is my env for compiling from cmd.
Path
*
*C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\bin\HostX64\x64
Include
*
*C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\ATLMFC\include
*C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\include
*C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\ucrt
*C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\shared
*C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\um
*C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\winrt
*C:\Program Files (x86)\Windows Kits\10\include\10.0.18362.0\cppwinrt
LIB
*
*C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\ATLMFC\lib\x64
*C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\lib\x64
*C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\ucrt\x64
*C:\Program Files (x86)\Windows Kits\10\lib\10.0.18362.0\um\x64
Also, you can set /MD in the CL environment variable to generate code with dynamically linked runtime libs for a smaller executable, because the default, /MT, links the runtime statically.
You can also flexibly override options from cmd, though a warning message is displayed about the changed option.
https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/3600tzxa(v=vs.100)
https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/kezkeayy(v=vs.100)
A: The compilers can be used from command line (or makefiles) just like any other compilers. The main things you need to take care of are the INCLUDE and LIB environment variables, and PATH. If you're running from cmd.exe, you can just run this .bat to set the environment:
C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat
If you're trying to use the compilers from a makefile, Cygwin, MinGW, or something like that you need to set the environment variables manually. Assuming the compiler is installed in the default location, this should work for the Visual Studio 2008 compiler and the latest Windows SDK:
Add to PATH:
*
*C:\Program Files\Microsoft SDKs\Windows\v6.1\Bin
*C:\Program Files\Microsoft Visual Studio 9.0\VC\Bin
*C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE
Add to INCLUDE:
*
*C:\Program Files\Microsoft SDKs\Windows\v6.1\Include
*C:\Program Files\Microsoft Visual Studio 9.0\VC\include
*C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include
Add to LIB:
*
*C:\Program Files\Microsoft SDKs\Windows\v6.1\Lib
*C:\Program Files\Microsoft Visual Studio 9.0\VC\lib
These are the bare minimum, but should be enough for basic things. Study the vcvarsall.bat script to see what more you may want to set.
A: You can simply run the batch file which sets the variables yourself. In VS08 it's located at:-
C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat
A: Create your own batch file (say clenv.bat), and call that instead of cl:
@echo off
:: Load compilation environment
call "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"
:: Invoke compiler with any options passed to this batch file
"C:\Program Files\Microsoft Visual Studio 9.0\VC\bin\cl.exe" %*
clenv.bat can now be invoked just like cl.exe, except that it will first load the needed environment variables first.
A: What the vcvars32 or vsvars32 batch files do is not rocket science. They simply set the PATH, INCLUDE, LIB, and possibly the LIBPATH environment variables to sensible defaults for the particular compiler version.
All you have to do is make sure that those things are set properly for your Ant or makefile (either before invoking them or within them).
For INCLUDE and LIB/LIBPATH, an alternative to setting those items in environment variables is to pass those settings to the command line as explicit parameters.
A: The vcvarsall.bat batch file which is run by the Visual Studio command prompt is simply trying to keep your system environment variables and paths nice and clean (and is important if you have multiple versions of Visual Studio).
If you're happy limiting your setup to one version and having a long path and set of environment variables, transfer these settings (manually) to the System Environment Variables (My Computer|Properties --- or Win-Pause/Break).
I'd recommend against this though!
A: The trick is to always use the correct vcvars batch file. If you have just one version of Visual Studio installed, that's no big problem. If you're dealing with multiple versions like me, it becomes very easy to run a MSVC++ 14 build in a console that was set up with a MSVC++ 15 vcvars file. It might or might not work, but whatever you're getting will be different from what you'd be building from within Visual Studio.
We have dealt with that issue in terp by deriving the proper vcvars file from the chosen compiler and always setting up the environment internally to the tool invocation. This way, you always have the right vcvars file for the compiler you're using.
Just to reiterate: I highly recommend against trying to duplicate manually what the vcvars file does for you. You're bound to miss something or get it just right enough that it looks like it's working while actually doing something slightly different from what you wanted.
A: My version of opening the Visual Studio command line (the Visual Studio Command Prompt in visual-studio-2010), used internally to build a library/project and then perform some extra steps with the resulting DLL files.
Copy these lines to your Compile and execute other steps.cmd file, or similar.
@echo off
REM Load Visual Studio's build tools
call "%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\VC\vcvarsall.bat" x86
REM Choose what you want to do, 1 or 2 by (un)commenting
REM 1. Add your cl.exe (or msbuild.exe or other) commands here
REM msbuild.exe MyProject.csproj
REM cl.exe
REM custom-step.exe %*
REM pause
REM 2. Open a normal interactive system command shell with all variables loaded
%comspec% /k
In this version of the script, I "stay" in interactive command line mode afterwards. Change the last line to REM %comspec% /k to use the script only for non-interactive purposes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: How do you override the string representation the HTML helper methods use for a model’s properties? The html helper methods check the ViewDataDictionary for a value. The value can either be in the dictionary or in the Model, as a property. To extract the value, an internal sealed class named the ViewDataEvaluator uses PropertyDescriptor to get the value. Then, Convert.ToString() is called to convert the object returned to a string.
Desired code in Controller action
The controller action should only populate the Model, not format it (formatting the model is global).
Desired code in View
The view can render a HTML textbox and extract the string representation of the property with this line of code:
<%=Html.TextBox(“Date”) %>
<%=Html.TextBox(“Time”) %>
<%=Html.TextBox(“UnitPrice”) %>
Binding Model's Property to HtmlHelper.TextBox()
For the textbox’s value, the UnitPrice property’s value from the model instance is converted to a string. I need to override this behavior with my own conversion to a string, which is per property – not per type. For example, I need a different string representation of a decimal for UnitPrice and another string representation of a decimal for UnitQuantity.
For example, I need to format the UnitPrice's decimal precision based on the market.
string decimalPlaces = ViewData.Model.Precision.ToString ();
<%=Html.TextBox(“UnitPrice”, ViewData.Model.TypeName.UnitPrice.ToString("N" + decimalPlaces)) %>
2-way databinding please
Just like the IModelBinder is the Parse for each property of the model, I need a Format for each property, kinda like Windows Forms binding, but based on the model instead of the control. This would enable the model to round-trip and have proper formatting. I would prefer a design where I could override the default formatting. In addition, my model is in a separate assembly, so attributed properties specifying a formatter are not an option.
Please note I need property specific formatting for a model, not type specific formatting.
A: There's no way to specify a format with the helpers themselves. The approach you've taken will work. Another approach is to add the value pre-formatted into the ModelState.
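A minimal sketch of that pre-formatting idea in the controller action (UnitPrice and Precision come from the question; everything else is assumed):
// pre-format the value so Html.TextBox("UnitPrice") picks up this string
// from ViewData instead of calling Convert.ToString() on the model property
ViewData["UnitPrice"] = model.UnitPrice.ToString("N" + model.Precision);
return View(model);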
EDIT: Are you sure you even want to format a text input with the currency? For example, what you would see in the input is:
<input type="text" name="UnitPrice" value="$1.23" />
When you post that back to the server, we won't understand it. Instead, I'd put the currency symbol outside of the text input. For example:
$<%= Html.TextBox("UnitPrice") %>
I'm sure there's an easy method to render "$" without hard-coding it so it's localizable, but I don't know what it is offhand.
EDIT AGAIN
A comment from a developer on my team:
Well, to be fair, this isn’t that bad. Often when you format a number or a date it’s still understandable coming back in. For example, padding a number (like a ZIP code) to 5 digits, padding a decimal to the hundredths, formatting a date to be yyyy-mm-dd, etc. will come in just fine. Adding extra characters like currency symbols will break, but normally input fields don’t take or display currency symbols anyway – it’s implied.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Converting an integer to a hexadecimal string in Ruby Is there a built in way to convert an integer in Ruby into its hexadecimal equivalent?
Something like the opposite of String#to_i:
"0A".to_i(16) #=>10
Like perhaps:
"0A".hex #=>10
I know how to roll my own, but it's probably more efficient to use a built in Ruby function.
A: How about using %/sprintf:
i = 20
"%x" % i #=> "14"
A: To summarize:
p 10.to_s(16) #=> "a"
p "%x" % 10 #=> "a"
p "%02X" % 10 #=> "0A"
p sprintf("%02X", 10) #=> "0A"
p "#%02X%02X%02X" % [255, 0, 10] #=> "#FF000A"
A: Just in case you have a preference for how negative numbers are formatted:
p "%x" % -1 #=> "..f"
p -1.to_s(16) #=> "-1"
A: You can give to_s a base other than 10:
10.to_s(16) #=> "a"
Note that in Ruby 2.4 Fixnum and Bignum were unified in the Integer class.
If you are using an older Ruby, check the documentation of Fixnum#to_s and Bignum#to_s.
A: Here's another approach:
sprintf("%02x", 10).upcase
see the documentation for sprintf here: http://www.ruby-doc.org/core/classes/Kernel.html#method-i-sprintf
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "217"
}
|
Q: Performance: call-template vs apply-template In XSLT processing, is there a performance difference between apply-template and call-template? In my stylesheets there are many instances where I can use either; which is the best choice?
A: As with all performance questions, the answer will depend on your particular configuration (in particular the XSLT processor you're using) and the kind of processing that you're doing.
<xsl:apply-templates> takes a sequence of nodes and goes through them one by one. For each, it locates the template with the highest priority that matches the node, and invokes it. So <xsl:apply-templates> is like a <xsl:for-each> with an <xsl:choose> inside, but more modular.
In contrast, <xsl:call-template> invokes a template by name. There's no change to the context node (no <xsl:for-each>) and no choice about which template to use.
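To make the contrast concrete, a minimal sketch of both styles (the element and template names are invented, and these are stylesheet fragments):
<!-- push: the processor picks the best-matching template for each node -->
<xsl:apply-templates select="order/item"/>
<xsl:template match="item[@type='book']">...</xsl:template>
<xsl:template match="item">...</xsl:template>

<!-- pull: one named template is invoked explicitly, no matching involved -->
<xsl:call-template name="format-price">
  <xsl:with-param name="value" select="@price"/>
</xsl:call-template>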
So with exactly the same circumstances, you might imagine that <xsl:call-template> will be faster because it's doing less work. But if you're in a situation where either <xsl:apply-templates> or <xsl:call-template> could be used, you're probably going to be doing the <xsl:for-each> and <xsl:choose> yourself, in XSLT, rather than the processor doing it for you, behind the scenes. So in the end my guess is that it will probably balance out. But as I say it depends a lot on the kind of optimisation your processor has put into place and exactly what processing you're doing. Measure it and see.
My rules of thumb about when to use matching templates and when to use named templates are:
*
*use <xsl:apply-templates> and matching templates if you're processing individual nodes to create a result; use modes if a particular node needs to be processed in several different ways (such as in the table of contents vs the body of a document)
*use <xsl:call-template> and a named template if you're processing something other than an individual node, such as strings or numbers or sets of nodes
*(in XSLT 2.0) use <xsl:function> if you're returning an atomic value or an existing node
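To make the contrast concrete, here's a minimal sketch of the two styles (assuming an input document whose root element is <list> with <item> children - names here are illustrative):
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- push style: the processor matches each item node against this template -->
  <xsl:template match="item">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
  <!-- pull style: invoked explicitly by name; the context node is unchanged -->
  <xsl:template name="make-heading">
    <xsl:param name="text"/>
    <h1><xsl:value-of select="$text"/></h1>
  </xsl:template>
  <xsl:template match="/list">
    <xsl:call-template name="make-heading">
      <xsl:with-param name="text" select="'Items'"/>
    </xsl:call-template>
    <ul><xsl:apply-templates select="item"/></ul>
  </xsl:template>
</xsl:stylesheet>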
A: It may depend on the xml parser you are using. I can't speak for anything but .NET 2003 parser where I did some informal performance tests on push vs pull XSLT code. This is similar to what you are asking: apply-template = push and call-template = pull. I was convinced push would be faster, but that was not the case. It was about even.
Sorry I don't have the exact tests now. I recommend trying it out with your parser of choice and see if there is any major difference. My bet is there won't be.
A: apply-templates and call-template do not perform the same task, so a performance comparison is not really relevant here.
call-template takes a template name as a parameter, whereas apply-templates takes an XPath expression. apply-templates is therefore much more powerful, since you do not know in advance which template will be executed.
You will get performance issues if you use complex XPath expressions. Avoid "//" in your XPath expressions, since it forces every node of your input document to be evaluated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: Is it legal to pass a newly constructed object by reference to a function? Specifically, is the following legal C++?
class A{};
void foo(A*);
void bar(const A&);
int main(void)
{
foo(&A()); // 1
bar(A()); // 2
}
It appears to work correctly, but that doesn't mean it's necessarily legal. Is it?
Edit - changed A& to const A&
A: foo is not allowed in fully standard-compliant C++, whereas bar is okay. Chances are, though, that foo will compile with a warning, and bar may or may not compile with a warning as well.
A() creates a temporary object, which, unless bound to a reference (as is the case in bar) or used to initialize a named object, is destroyed at the end of the full-expression in which it was created. A temporary created to hold a reference initializer persists until the end of its reference's scope. For the case of bar, that's the function call, so you can use the A object inside bar perfectly safely. It is forbidden to bind a temporary object (which is an rvalue) to a non-const reference. It is similarly forbidden to take the address of an rvalue (to pass as an argument to initialize the A* parameter of foo).
A: Short answer is yes.
If the object is received by the function as a const reference parameter - as in your modified bar(const A&) - then it's totally legal. The function can operate on the object, but the object will be destructed after the function call (the address of the temporary can be taken, but it shall not be stored and used after the function call - see the reason below).
The foo(A*) call is legal too, because the temporary object is destroyed at the end of the full-expression. However, most compilers will emit a warning about taking the address of a temporary.
The original version of bar(A&) shall not compile; it's against the standard to initialize a non-const reference from a temporary.
C++ standard chapter 12.2
3 [...] Temporary objects are destroyed as the last step in evaluating the full-expression (1.9) that (lexically) contains the point where they were created. [...]
4 There are two contexts in which temporaries are destroyed at a different point than the end of the full-expression. The first context is when an expression appears as an initializer for a declarator defining an object. In that context, the temporary that holds the result of the expression shall persist until the object's initialization is complete. [...]
5 The second context is when a reference is bound to a temporary. The temporary to which the reference is bound or the temporary that is the complete object of a subobject to which the reference is bound persists for the lifetime of the reference except as specified below. A temporary bound to a reference member in a constructor's ctor-initializer (12.6.2) persists until the constructor exits. A temporary bound to a reference parameter in a function call (5.2.2) persists until the completion of the full-expression containing the call.
A temporary bound to the returned value in a function return statement (6.6.3) persists until the function exits.
A full-expression is an expression that is not a subexpression of another expression.
A: 1: Taking the address of a temporary is not allowed. Visual C++ allows it as a language extension (language extensions are on by default).
2: This is perfectly legal.
A: No, it's against the standard to pass a non-const reference to a temporary object. You can use a const reference:
class A{};
void bar(const A&);
int main(void)
{
bar(A()); // 2
}
So while some compilers will accept it, and it would work as long as you don't use the memory after the semicolon, a conforming compiler will not accept it.
A: Those A objects will only exist until execution reaches the semicolon. So, the calls are safe, but don't try to save the pointer and use it later. Also, the compiler may require bar take a const reference.
A: It looked like it would work, but it did not compile with g++ using the -Wall option; here is what I get:
michael@hardy-lenovo:~/Desktop$ g++ -Wall a.cpp
a.cpp: In function ‘int main()’:
a.cpp:8: warning: taking address of temporary
a.cpp:9: error: invalid initialization of non-const reference of type ‘A&’ from a temporary of type ‘A’
a.cpp:4: error: in passing argument 1 of ‘void bar(A&)’
michael@hardy-lenovo:~/Desktop$
Looks like you will need to use a constant reference.
A: Perfectly legal.
The object will exist on the stack during the function call, just like any other local variable as well.
A: It is legal. We use it sometimes to provide a default value which we might want to ignore.
int dosomething(error_code& _e = ignore_errorcode()) {
//do something
}
In the above case it will construct an empty error code object if no error_code is passed to the function.
A: for //2 you need a const reference
for //1 I think it's legal but useless
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: XML Serialize boolean as 0 and 1 The XML Schema Part 2 specifies that an instance of a datatype that is defined as boolean can have the following legal literals {true, false, 1, 0}.
The following XML, for example, when deserialized, sets the boolean property "Emulate" to true.
<root>
<emulate>1</emulate>
</root>
However, when I serialize the object back to the XML, I get true instead of the numerical value. My question is, is there a way that I can control the boolean representation in the XML?
A: You can also do this by using some XmlSerializer attribute black magic:
[XmlIgnore]
public bool MyValue { get; set; }
/// <summary>Get a value purely for serialization purposes</summary>
[XmlElement("MyValue")]
public string MyValueSerialize
{
get { return this.MyValue ? "1" : "0"; }
set { this.MyValue = XmlConvert.ToBoolean(value); }
}
You can also use other attributes to hide this member from intellisense if you're offended by it! It's not a perfect solution, but it can be quicker than implementing IXmlSerializable.
A: You can implement IXmlSerializable which will allow you to alter the serialized output of your class however you want. This will entail creating the 3 methods GetSchema(), ReadXml(XmlReader r) and WriteXml(XmlWriter r). When you implement the interface, these methods are called instead of .NET trying to serialize the object itself.
Examples can be found at:
http://www.developerfusion.co.uk/show/4639/ and
http://msdn.microsoft.com/en-us/library/system.xml.serialization.ixmlserializable.aspx
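For reference, a rough sketch of the shape this takes (the class and element names here just mirror the XML in the question):
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class Root : IXmlSerializable
{
    public bool Emulate { get; set; }

    public XmlSchema GetSchema() { return null; }

    public void ReadXml(XmlReader reader)
    {
        reader.ReadStartElement("root");
        // XmlConvert.ToBoolean accepts "true", "false", "1" and "0"
        Emulate = XmlConvert.ToBoolean(reader.ReadElementString("emulate"));
        reader.ReadEndElement();
    }

    public void WriteXml(XmlWriter writer)
    {
        // write the numeric form back out instead of the default "true"/"false"
        writer.WriteElementString("emulate", Emulate ? "1" : "0");
    }
}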
A: No, not using the default System.Xml.XmlSerializer: you'd need to change the data type to an int to achieve that, or muck around with providing your own serialization code (possible, but not much fun).
However, you can simply post-process the generated XML instead, of course, either using XSLT, or simply using string substitution. A bit of a hack, but pretty quick, both in development time and run time...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: Reuse of SQL stored procedures across applications I'm curious about people's approaches to using stored procedures in a database that is accessed by many applications. Specifically, do you tend to keep different sets of stored procedures for each application, do you try to use a shared set, or do you do a mix?
On the one hand, reuse of SPs allows for fewer changes when there is a model change or something similar and ideally less maintenance. On the other hand, if the needs of the applications diverge, changes to a stored procedure for one application can break other applications. I should note that in our environment, each application has its own development team, with poor communication between them. The data team has better communication though, and is mostly tasked with the stored procedure writing.
Thanks!
A: Stored procedures should be created based on the data you intend to return, not the application making the request. If you have a stored procedure that is GetAllItems, it should return all of the items in the database. If one of the applications would like to get all of the items by category, create GetAllItemsByCategory. There is no reason for the business rules of a stored procedure to change based on the application requesting the data.
A: My experience has been that having SPs shared by multiple applications is a cause of pain. In fact, I would argue that having a database that is accessed directly by more than one application is not the best long term architecture.
The pattern I recommend and have implemented is that only one application should "own" each database, and provide APIs (services, etc.) for other applications to access and modify data.
This has several advantages:
*
*The owning application can apply any business logic, logging, etc. to make sure it remains stable
*If the schema is changed, all interfaces are known and can be tested to make sure external applications will still work
A: Stored procedures should expose business rules which don't change depending on the application using them. This lets the rules be stored and updated once instead of every place they are used, which is a nightmare.
A: Think of it this way: your stored procedures are about the data that's under them, and shouldn't really know about the applications above them. It's possible that one application will need to read or update data in a way that another doesn't, and so one would use SPs that the other wouldn't.
If it were my application / database / etc, and changes to an SP to improve one application broke another, I would consider that evidence of a deeper design issue.
A: I believe the last portion of your question answers itself.
With communication already poor, sharing procedures between development teams would just add potential points of failure and could cause either team hardship.
If I'm on the same team working on multiple projects we will save some time and share procedures, but typically I have found that a little duplication (a few procedures here and there) helps avoid the catastrophic changes/duplication needed later when the applications start to diverge.
LordScarlet also points out a key element: if a procedure is generic, with no business logic, sharing it shouldn't be an issue.
A: Whenever we had stored procedures that were common to multiple applications, we would create a database just for those procedures (and views and tables, etc). That database (we named "base") would then have a developer (or team) responsible for it (maintenance and testing).
If a different team needed new functionality, they could write it and the base developer would either implement it in the base DB or suggest a simpler way.
A: It all depends on your abstraction strategy. Are the stored procedures treated as a discrete point of abstraction, or are they treated as just another part of the application that calls them.
The answer to that will tell you how to manage them. If they are a discrete abstraction, they can be shared, as if you need new functionality, you'll add new procedures. If they are part of the app that calls them, they shouldn't be shared.
A: We try to use a single, shared stored proc wherever possible, but we've run into the situation you describe as well. We handled it by adding an application prefix to the stored procs (ApplicationName_StoredProcName).
Often these stored procs call the centralized or "master" stored proc, but this method leaves room for app specific changes down the road.
A: I don't think sharing Sprocs among multiple applications makes sense.
I can see the case for sharing a database in related applications, but presumably those applications are separate in large part because they treat the data very differently from one another.
Using the same architecture could work across applications, but imagine trying to use the same business logic layer in multiple applications. "But wait!" you say, "That's silly... if I'm using the same BLL, why would I have a separate app? They do the same thing!"
QED.
A: Ideally use one proc not multiple versions. If you need versions per customer, investigate the idea of 1 db per customer as opposed to 1 db for all customers. This also allows for some interesting staging of db's on different servers ( allocate the larger/heavier usage ones to bigger servers while smaller ones can share hardware)
A: If you look for ability to share the SQL code try building a library of abstract functions. This way you could reuse some code which does generic things and keep business logic separate for each application. The same could be done with the views - they could be kept quite generic and useful for many applications.
You will probably find out that there are not that many uses for common stored procedures as you go along.
That said we once implemented a project which was working with a very badly designed legacy database. We've implemented a set of stored procedures which made information retrieval easy. When other people from other teams wanted to use the same information we refactored our stored procedures to make them more generic, added an extra layer of comments and documentation and allowed other people to use our procedures. That solution worked rather well.
A: Many stored procedures are application independent, but there may be a few that are application dependent. For example, the CRUD (Create, Select, Update, Delete) stored procedures can go across applications. In particular you can throw in auditing logic (sometimes done in triggers, but there is a limit to how complicated you can get in triggers). If you have some type of standard architecture in your software shop, the middle tier may require a stored procedure to create/select/update/delete from the database regardless of the application, in which case the procedure is shared.
At the same time there may be some useful ways of viewing the data, e.g. GetProductsSoldBySalesPerson, which will also be application independent. You may have a bunch of lookup tables for some fields like department, address, etc., so there may be a procedure to return a view of the table with all the text fields resolved. I.e. instead of SalesPersonID, SaleDate, CustomerID, DepartmentID, CustomerAddressID the procedure returns SalesPersonName, SaleDate, CustomerName, DepartmentName, CustomerAddress. This could also be used across applications. Admittedly, creating ways to view the data is the domain of a view, but often people use stored procedures to do this.
So basically: when deleting from your table, do you need to delete from 3 or 4 other tables to ensure data integrity? Is the logic too complicated for a trigger? Then a stored procedure that all applications use to do deletions may be important. The same goes for things that need to be done on creation. If there are common joins that are always done, it may make sense to have one stored procedure to do all the joins. Then if later you change the tables around you can keep the procedure the same and just change the logic there. An illustrative sketch of such a shared lookup procedure follows.
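A shared lookup procedure of that kind might look something like this (table and column names are made up for illustration):
CREATE PROCEDURE GetCustomerDetails
    @CustomerID int
AS
    -- resolve the lookup IDs to their display text in one shared place
    SELECT c.CustomerName,
           d.DepartmentName,
           a.AddressText AS CustomerAddress
    FROM Customers c
    JOIN Departments d ON d.DepartmentID = c.DepartmentID
    JOIN Addresses a ON a.AddressID = c.CustomerAddressID
    WHERE c.CustomerID = @CustomerID
GO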
A: The concept of sharing a data schema across multiple applications is a tough one. Invariably, your schema gets compromised for performance reasons: denormalization, which indexes to create. If you can cut the size of a row in half, you can double the number of rows per page and, likely, halve the time it takes to scan the table. However, if you only include 'common' features on the main table and keep data only of interest to specific applications on different (but related) tables, you have to join everywhere to get back to the 'single table' idea.
More indexes to support different applications will cause ever-increasing time to insert, update and delete data from each table.
The database server will often become a bottleneck as well, because databases cannot be load-balanced. You can partition your data across multiple servers, but that gets very complicated too.
Finally, the degree of co-ordination required is typically huge, no doubt with fights between different departments over whose requirements get priority, and new developments will get bogged down.
In general, the 'isolated data silo per application' model works better. Almost everything we do - I work for a contract software house - is based on importing data from and exporting data to other systems, with our applications' own databases.
It may well be easier in data warehouse/decision support systems; I generally work on OLTP systems where transaction performance is paramount.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Problem with Java 1.6 and Desktop.open() I've been using Destop.open() to launch a .pdf viewer on Windows machines, both Vista and XP, and most of them work just fine. However, on one XP machine the call does not work, simply returning without throwing any exceptions, and the viewer does not launch. On that machine the file association is properly set up as far as I can tell: double-clicking a .pdf works, as does the "start xxx.pdf" command on the command prompt. I'm thinking it must be a Windows configuration issue, but can't put my finger on it.
Has anyone else seen this problem?
A: This is a known problem with early versions of XP SP2, the ShellExecute function stopped accepting URIs; bring the XP machines patches up to date.
To view the exceptions make sure the Java Console is turned on:
Control Panel->Java Control Panel->Advanced->Java Console.
A: I couldn't find the answer anywhere, but I have two machines with Windows 7 64-bit on which Desktop.getDesktop().open(file) fails with a "failed to open file" or "access is denied" error on Java 6 and Java 7.
Windows Explorer is able to open applications based on the filename with extension:
Runtime rt = Runtime.getRuntime();
rt.exec(new String[]{"explorer", "C:\\myfile.pdf"});
rt.exec(new String[]{"explorer", "C:\\myfile.wmv"});
A: I still have this problem with one of my customers; I'll check which version of Windows he runs (as far as I remember, Windows 7 64-bit). The file association with .pdf is OK (checked that), and he uses the latest Java version (checked the Java updates), so this is still a live problem as far as I am concerned.
However, I ran into this bug report:
sun bug report 6764271
There it says the problem might have something to do with the registration of some of the Adobe versions (using READ instead of OPEN in the Windows registry).
Still a shame a bug like this is low priority and still open (reported in 2008).
I'll check with my customer soon and update my answer here as soon as I got it resolved.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to call a function dynamically that is part of an instantiated cfc, without using Evaluate()? For example I want to be able to programatically hit a line of code like the following where the function name is dynamically assigned without using Evaluate(). The code below of course doesn't work but represents what I would like to do.
application.obj[funcName](argumentCollection=params)
The only way I can find to call a function dynamically is by using cfinvoke, but as far as I can tell that instantiates the related cfc/function on the fly and can't use a previously instantiated cfc.
Thanks
A: According to the docs, you can do something like this:
<!--- Create the component instance. --->
<cfobject component="tellTime2" name="tellTimeObj">
<!--- Invoke the methods. --->
<cfinvoke component="#tellTimeObj#" method="getLocalTime" returnvariable="localTime">
<cfinvoke component="#tellTimeObj#" method="getUTCTime" returnvariable="UTCTime">
You should be able to simply call it with method="#myMethod#" to dynamically call a particular function.
A: You can use cfinvoke. You don't have to specify a component.
<cfinvoke method="application.#funcName#" argumentCollection="#params#">
A: You can also do something very similar, to the way you wanted to use it. You can access the method within the object using the syntax you used, you just can't call it at the same time. However, if you assign it to a temp variable, you can then call it
<!--- get the component (has methods 'sayHi' and a method 'sayHello') --->
<cfset myObj = createObject("component", "test_object")>
<!--- set the function that we want dynamically then call it (it's a two step process) --->
<cfset func = "sayHi">
<cfset funcInstance = myObj[func]>
<cfoutput>#funcInstance("Dave")#</cfoutput>
<cfset func = "sayHello">
<cfset funcInstance = myObj[func]>
<cfoutput>#funcInstance("Dave")#</cfoutput>
A: In CFML, functions are first-class members of the language. This allows us to pass them around like a variable. In the following example I will copy the function named 'foobar' and rename it "$fn" within the same object. Then, we can simply call $fn().
funcName = 'foobar';
application.obj.$fn = application.obj[funcName];
application.obj.$fn(argumentCollection=arguments);
The context of the function is important, especially if it references any values in the 'variables' or 'this' scope of the object. Note: this is not thread safe for CFC instances in shared scopes!
The fastest method is to use Ben Doom's recommendation. I just wanted to be thorough.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do you create a MANIFEST.MF that's available when you're testing and running from a jar in production? I've spent far too much time trying to figure this out. This should be the simplest thing and everyone who distributes Java applications in jars must have to deal with it.
I just want to know the proper way to add versioning to my Java app so that I can access the version information when I'm testing, e.g. debugging in Eclipse and running from a jar.
Here's what I have in my build.xml:
<target name="jar" depends = "compile">
<property name="version.num" value="1.0.0"/>
<buildnumber file="build.num"/>
<tstamp>
<format property="TODAY" pattern="yyyy-MM-dd HH:mm:ss" />
</tstamp>
<manifest file="${build}/META-INF/MANIFEST.MF">
<attribute name="Built-By" value="${user.name}" />
<attribute name="Built-Date" value="${TODAY}" />
<attribute name="Implementation-Title" value="MyApp" />
<attribute name="Implementation-Vendor" value="MyCompany" />
<attribute name="Implementation-Version" value="${version.num}-b${build.number}"/>
</manifest>
<jar destfile="${build}/myapp.jar" basedir="${build}" excludes="*.jar" />
</target>
This creates /META-INF/MANIFEST.MF and I can read the values when I'm debugging in Eclipse thusly:
public MyClass()
{
try
{
InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
Manifest manifest = new Manifest(stream);
Attributes attributes = manifest.getMainAttributes();
String implementationTitle = attributes.getValue("Implementation-Title");
String implementationVersion = attributes.getValue("Implementation-Version");
String builtDate = attributes.getValue("Built-Date");
String builtBy = attributes.getValue("Built-By");
}
catch (IOException e)
{
logger.error("Couldn't read manifest.");
}
}
But, when I create the jar file, it loads the manifest of another jar (presumably the first jar loaded by the application - in my case, activation.jar).
Also, the following code doesn't work either although all the proper values are in the manifest file.
Package thisPackage = getClass().getPackage();
String implementationVersion = thisPackage.getImplementationVersion();
Any ideas?
A: You want to use this:
Enumeration<URL> resources = Thread.currentThread().getContextClassLoader().getResources("META-INF/MANIFEST.MF");
You can parse the URL to figure out WHICH jar the manifest is from, and then read the URL via getInputStream() to parse the manifest.
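A sketch of that enumeration (imports from java.util, java.util.jar, java.net and java.io omitted; the Implementation-Title value to match is whatever you put in your own manifest):
Enumeration<URL> resources = Thread.currentThread()
        .getContextClassLoader().getResources("META-INF/MANIFEST.MF");
while (resources.hasMoreElements()) {
    URL url = resources.nextElement();
    InputStream stream = url.openStream();
    try {
        Manifest manifest = new Manifest(stream);
        String title = manifest.getMainAttributes().getValue("Implementation-Title");
        if ("MyApp".equals(title)) {
            // found our own jar's manifest; read the other attributes here
        }
    } finally {
        stream.close();
    }
}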
A: ClassLoader.getResource(String) will load the first manifest it finds on the classpath, which may be the manifest for some other JAR file. Thus, you can either enumerate all the manifests to find the one you want or use some other mechanism, such as a properties file with a unique name.
A: You can get the manifest for an arbitrary class in an arbitrary jar without parsing the class url (which could be brittle). Just locate a resource that you know is in the jar you want, and then cast the connection to JarURLConnection.
If you want the code to work when the class is not bundled in a jar, add an instanceof check on the type of URL connection returned. Classes in an unpacked class hierarchy will return an internal Sun FileURLConnection instead of the JarURLConnection. Then you can load the Manifest using one of the InputStream methods described in other answers.
@Test
public void testManifest() throws IOException {
URL res = org.junit.Assert.class.getResource(org.junit.Assert.class.getSimpleName() + ".class");
JarURLConnection conn = (JarURLConnection) res.openConnection();
Manifest mf = conn.getManifest();
Attributes atts = mf.getMainAttributes();
for (Object v : atts.values()) {
System.out.println(v);
}
}
A: You can access the manifest (or any other) file within a jar if you use the same class loader to as was used to load the classes.
this.getClass().getClassLoader().getResourceAsStream( ... ) ;
If you are multi-threaded use the following:
Thread.currentThread().getContextClassLoader().getResourceAsStream( ... ) ;
This is also a realy useful technique for including a default configuration file within the jar.
A: Here's what I've found that works:
packageVersion.java:
package com.company.division.project.packageversion;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;
public class packageVersion
{
void printVersion()
{
try
{
InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
if (stream == null)
{
System.out.println("Couldn't find manifest.");
System.exit(0);
}
Manifest manifest = new Manifest(stream);
Attributes attributes = manifest.getMainAttributes();
String impTitle = attributes.getValue("Implementation-Title");
String impVersion = attributes.getValue("Implementation-Version");
String impBuildDate = attributes.getValue("Built-Date");
String impBuiltBy = attributes.getValue("Built-By");
if (impTitle != null)
{
System.out.println("Implementation-Title: " + impTitle);
}
if (impVersion != null)
{
System.out.println("Implementation-Version: " + impVersion);
}
if (impBuildDate != null)
{
System.out.println("Built-Date: " + impBuildDate);
}
if (impBuiltBy != null)
{
System.out.println("Built-By: " + impBuiltBy);
}
System.exit(0);
}
catch (IOException e)
{
System.out.println("Couldn't read manifest.");
}
}
/**
* @param args
*/
public static void main(String[] args)
{
packageVersion version = new packageVersion();
version.printVersion();
}
}
Here's the matching build.xml:
<project name="packageVersion" default="run" basedir=".">
<property name="src" location="src"/>
<property name="build" location="bin"/>
<property name="dist" location="dist"/>
<target name="init">
<tstamp>
<format property="TIMESTAMP" pattern="yyyy-MM-dd HH:mm:ss" />
</tstamp>
<mkdir dir="${build}"/>
<mkdir dir="${build}/META-INF"/>
</target>
<target name="compile" depends="init">
<javac debug="on" srcdir="${src}" destdir="${build}"/>
</target>
<target name="dist" depends = "compile">
<mkdir dir="${dist}"/>
<property name="version.num" value="1.0.0"/>
<buildnumber file="build.num"/>
<manifest file="${build}/META-INF/MANIFEST.MF">
<attribute name="Built-By" value="${user.name}" />
<attribute name="Built-Date" value="${TIMESTAMP}" />
<attribute name="Implementation-Vendor" value="Company" />
<attribute name="Implementation-Title" value="PackageVersion" />
<attribute name="Implementation-Version" value="${version.num} (b${build.number})"/>
<section name="com/company/division/project/packageversion">
<attribute name="Sealed" value="false"/>
</section>
</manifest>
<jar destfile="${dist}/packageversion-${version.num}.jar" basedir="${build}" manifest="${build}/META-INF/MANIFEST.MF"/>
</target>
<target name="clean">
<delete dir="${build}"/>
<delete dir="${dist}"/>
</target>
<target name="run" depends="dist">
<java classname="com.company.division.project.packageversion.packageVersion">
<arg value="-h"/>
<classpath>
<pathelement location="${dist}/packageversion-${version.num}.jar"/>
<pathelement path="${java.class.path}"/>
</classpath>
</java>
</target>
</project>
A: I've found the comment by McDowell to be true - which MANIFEST.MF file gets picked up depends on the classpath and might not be the one wanted. I use this
String cp = PCAS.class.getResource(PCAS.class.getSimpleName() + ".class").toString();
cp = cp.substring(0, cp.indexOf(PCAS.class.getPackage().getName()))
+ "META-INF/MANIFEST.MF";
Manifest mf = new Manifest((new URL(cp)).openStream());
which I adapted from link text
A: Just don't use the manifest. Create a foo.properties.original file with content such as
version=@VERSION@
and then, in the same task where you build the jar, copy foo.properties.original to foo.properties and substitute the @VERSION@ token with the real version number (for example with Ant's <copy> task and a filterset).
A: I will also usually use a version file. I will create one file per jar since each jar could have its own version.
A: You can use a utility class Manifests from jcabi-manifests that automates finding and parsing of all MANIFEST.MF files available in the classpath. Then, you read any attribute with a one-liner:
final String name = Manifests.read("Build-By");
final String date = Manifests.read("Build-Date");
Also, check this out: http://www.yegor256.com/2014/07/03/how-to-read-manifest-mf.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: What is the best way to share files between Xen VM's? Is there some built-in way to share files between Xen guests? I don't currently need to share the actual images, just some data files.
A: Do you mean between Xen and the host or between different guests?
If between guests, I don't think anything is provided out of the box; you should probably use NFS, as it is the best-supported option that offers a decent range of permissions and attributes, though it assumes a trusted network.
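The NFS setup is the usual one - something like this in /etc/exports on the guest that owns the data (path and subnet are illustrative):
# export a directory read-write to the other guests' subnet
/srv/shared  192.168.1.0/24(rw,sync,no_subtree_check)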
A: You can also try glusterfs or lustre to share files between Xen guests.
A: I ran into this same issue, and my solution was to add an extra virtual drive to each guest which pointed to a shared LVM volume or device.
So xvda3 was pointed at /dev/md0 in the guest configuration file. You could just as easily use /dev/vg0/lg0 for a logical volume.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I use perldoc to lookup the %ENV variable? I find from reading perldoc perlvar, about a thousand lines in is help for %ENV. Is there a way to find that from the command line directly?
On my Windows machine, I've tried the following
perldoc ENV
perldoc %ENV
perldoc %%ENV
perldoc -r ENV (returns info about Use Env)
perldoc -r %ENV
perldoc -r %%%ENV
perldoc -r %%%%ENV (says No documentation found for "%ENV")
None actually return information about the %ENV variable.
How do I use perldoc to find out about %ENV, if I don't want to have to eye-grep through thousands of line?
I've tried the suggested "perldoc perlvar" and then typing /%ENV, but nothing happens.
perl -v: This is perl, v5.8.0 built for MSWin32-x86-multi-thread
Though I've asked about %ENV, this also applies to any general term, so knowing that %ENV is in perlvar for this one example won't help me next time when I don't know which section.
Is there a way to get perldoc to dump everything (ugh) and I can grep the output?
A: perldoc doesn't have an option to search for a particular entry in perlvar (like -f does for perlfunc). General searching is dependent on your pager (specified in the PAGER environment variable). Personally, I like "less." You can get less for windows from the GnuWin32 project.
A: Check out the latest development version of Pod::Perldoc. I submitted a patch which lets you do this:
$ perldoc -v '%ENV'
%ENV
$ENV{expr}
The hash %ENV contains your current environment. Setting a value in
"ENV" changes the environment for any child processes you subsequently
fork() off.
A: The searching for %ENV is a feature of the pager named 'less', not of perldoc. So if perldoc uses a different pager, this might not work.
Activestate Perl comes with HTML documentation, you can open perlvar in your browser, hit Ctrl+f and type %ENV, then hit enter.
A: I use Apache::Perldoc (old, but still does its job) on my local machine to browse the local documentation. If I have net access though, I just look at perldoc.perl.org and search. However, in this case, search isn't useful for the variables and it's better to use the Special variables link at the left of the page.
As you get more experience with Perl, you'll know where to look for the documentation. For now, you might have to refer to perltoc, but after a while you'll know to look for functions in perlfunc, variables in perlvar, and so on.
You might also use my Perl documentation documentation.
A: *
*Install unixutils for Windows
*Call:
perldoc perlvar | grep -i -A10 %ENV
A: firefox http://perldoc.perl.org/perlvar.html#%ENV
By the way, many many many bugs have been fixed since 5.8.0.
A: If you'd like to see the contents of your %ENV, you can use Data::Dumper to print it out in a rather readable format:
perl -MData::Dumper -e 'print Dumper \%ENV'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What's your favorite "programmer" cartoon? Personally I like this one:
P.S. Do not hotlink the cartoon without the site's permission please.
A: Without a doubt...
A:
("40% of OpenBSD installs lead to shark attacks. It's their only standing security issue.")
A: I like this one: http://xkcd.com/149/
(Proper User Policy apparently means Simon Says.)
A: Fax me some electricity, please?
A:
A:
alt text http://www.netfxharmonics.com/Images/WindowsLiveWriter/ComicStrip2.NETandPHPSourceCode_14857/DotNetPhpCartoon02.png
A:
A:
A: It is hideous but I can't look away. THE KRAMER http://www.geekherocomic.com/comics/2009-08-19-a-coding-paradox.png
A:
A: Quite demonstrative of why metric based incentives and engineers don't go hand in hand.
A: alt text http://www.phdcomics.com/comics/archive/phd120804s.gif
A: There's a Russian version of Bash.org at Bash.org.ru. What they do now is they take favorite quotes and turn them into cartoons or comic strips. Here's one of my favorite ones:
Tester http://s.bash.org.ru/img/67c5y2emi7vwmrnd263759.jpg
*
*We've got a clever tester now
-What do you mean?
*He's found bunch of stuff... A funny symbol there, a weird key combination here. He's real good
*But sometimes... I really want to smash his face
Link to site: http://bash.org.ru/comics/20080111
A: From Bug Bash:
alt text http://www.bugbash.net/strips/bug-bash20070723.gif
A: alt text http://www.geekherocomic.com/comics/2009-08-03-hardcoded-limits.png
Was I can do stackoverflow in a weekend the inspiration for this cartoon?
A: I wrote a production website that has the path /dev/random/ return 4 because of this comic.
A:
A:
*
*(http://thedailywtf.com)
A:
While not specifically about programming per se....wanting to program games is what got me interested in IT in the first place....
anyway this one made me laugh so hard when I saw it!
A: John Martz has a blog entry to go with this.
A: Tragically, my favorite cartoon is too old to be on the interwebs.
It's from Datamation. (Remember Datamation?) A man is sitting at a desk. There are bits of electronics everywhere. Rubble is strewn into every corner of the room. His face and hair are singed.
He is thinking "It's never done that before."
A: alt text http://images.orkut.com/orkut/albums2/ATgAAAB3idBHMFYig7ck6I4oqJ8HtkBHuqkha0UeOIkUaSj8LqiwUPIFIkQVJp2eMtTyQOpxLY3R6XSqqo6vsgv-wURQAJtU9VCzBj_EgcyVWltZNcREVSR-daV-1w.jpg
A:
http://www.qwantz.com/comics/comic2-1512.png
A: HAI
CAN HAS STDIO?
I HAS A VAR
IM IN YR LOOP
UP VAR!!1
VISIBLE VAR
IZ VAR BIGGER THAN 10? KTHXBYE
IM OUTTA YR LOOP
KTHXBAI
Taken from :http://lolcode.com/
KTHXBAI!!
A: XKCD Comic 303 - "Compiling"
('Are you stealing those LCDs?' 'Yeah, but I'm doing it while my code compiles')
I have this one pinned to the wall facing the entrance to our office :)
A: Here's a new one from Dinosaur Comics:
ladies and gentlemen: mr. stack overflow! http://www.qwantz.com/comics/comic2-82.png
A:
A:
A: This has made my year.
Some engineer out there has solved P=NP and it's locked up in an electric eggbeater calibration routine. For every 0x5f375a86 we learn about, there are thousands we never see.
A:
A: alt text http://globalnerdy.com/wordpress/wp-content/uploads/2007/11/dilbert-xp02.gif
A: Bit old but still one of my favs:
http://www.ok-cancel.com/strips/okcancel20031003.gif
A: http://www.ok-cancel.com/strips/okcancel20031010.gif
A: Another great Foxtrot comic. Possibly the most incomprehensible-to-non-geeks comic to sneak into the newspaper funnies.
A: The Contiki Strip
A comic set in a small Norwegian software company. All text in English. Check it out at http://contikistrip.kjempekjekt.com
Contiki Strip http://contikistrip.kjempekjekt.com/images/contikistrip5.png
Contiki Strip http://contikistrip.kjempekjekt.com/images/contikistrip7.png
A: "Helping" out fellow programming co-workers
A: This is my personal favorite... and the name of all my alpha releases!
alt text http://www.phdcomics.com/comics/archive/phd071009s.gif
original link
A:
A: Oh! There can be only one:
It's sooo funny, because it's true :)
A: I love this one:
A: alt text http://www.unt.edu/benchmarks/archives/2004/february04/screwupcolor.gif
This is my everyday.
A:
A: (it's how people feel )
A: Old black and white version (there's nothing new under the sun) http://www.lore.ua.ac.be/Teaching/SE3BAC/SoftwareSpecCartoon.gif
A: The very first bug bash is great!
http://www.bugbash.net/strips/bug-bash20050521.gif
A: alt text http://i44.tinypic.com/f07yh3.jpg
A: alt text http://www.phdcomics.com/comics/archive/phd101305s.gif
alt text http://www.phdcomics.com/comics/archive/phd101505s.gif
alt text http://www.phdcomics.com/comics/archive/phd101705s.gif
A: The 1337 set of comics from xkcd, starting with:
A:
A: alt text http://www.phdcomics.com/comics/archive/phd011406s.gif
A: Real Programmers Code in Binary http://zeljkofilipin.com/wp-content/uploads/2008/02/real-programmers-code-in-binary.jpg
According to http://homepages.strath.ac.uk/~cjbs17/computing/binary.html author of this image is Chris Kania and original is at http://www.kaniamania.com/html/1190.html but the entire site is down at the moment.
A:
A: http://shortminds.com/comics/2008-09-19.jpg
Check out Shortminds.com for more
A:
A: How about a math cartoon?
alt text http://catb.org/jargon/html/graphics/73-05-18.png
A: Religious debates are always fun!
A: .
A: Check this From Apple Desicion on APP.
A:
A: Can't help but love it...
A: alt text http://www.userfriendly.org/cartoons/archives/08jun/uf011627.gif
A: Made for grad students, but applies equally to programmers
alt text http://www.phdcomics.com/comics/archive/phd012609s.gif
PHD Comics - Brain on a Stick
A:
A:
A:
A: "Emacs Thumb" from User Friendly
Emacs Thumb http://www.userfriendly.org/cartoons/archives/07sep/uf010710.gif
A: alt text http://wondermark.com/comics/190.gif
A: http://hackles.org/cgi-bin/archives.pl?request=75
Hackles http://hackles.org/strips/cartoon75.png
A: Babbage and Lovelace Vs The Client
Includes historical notes
*
*Part 1
*Part 2
A: Of course, xkcd!
A:
a nice collection
A: I'm surprised this one was missing:
A:
A:
A: My all time favorite is probably xkcd's "Sudo make me a sandwich" comic, but there are SO MANY good webcomics out there that I thought I'd throw some others out for fun:
A co-worker pointed me to Sticks and Stones, which just got started pretty recently. It's sort of an xkcd ripoff, but there's some good stuff in there.
alt text http://www.arcanology.net/sticksandstones/comics/comic-10.gif
Hackles is frequently about programming. This one's probably my favorite:
Hackles, by Drake Emko & Jen Brodzik http://hackles.org/strips/cartoon334.png
It's not quite a programming comic, but I also really dug this strip from Full Frontal Nerdery:
Full Frontal Nerdity by Aaron Williams http://nodwick.humor.gamespy.com/ffn/strips/2008-03-12.jpg
A: C++ forest
A: http://www.qwantz.com/comics/comic2-172.png
(sequel to this)
A: I knew it was true. XKCD #224
"We lost the documentation on quantum mechanics. You'll have to decode the regexes yourself."
A: Baseline Expectations
Taken from Dilbert.com, Sept 12 2008
A: http://ihooky.com/comics/ihooky06.png
A:
makes me lol
link
A: I can't believe 6 pages of answers and no one mentions the banana jr 6000. How quickly we forget.
http://img17.imageshack.us/img17/3108/bananaadgz4.gif
The other one I love and couldn't find the image for has the punchline "Failure Mr. Jones, is hardly original. Now sit down."
Also from the Bloom County strip.
A: I didn't see this one which is one of my favorite xkcd :
(His books were kinda intimidating; rappelling down through his skylight seemed like the best option.)
A: alt text http://wondermark.com/comics/352.gif
A: Hello my favorit is this one
alt text http://curtis.lassam.net/comics/programmers.png
A: I can't believe someone hasn't put this one:
A:
A: alt text http://www.webcomicsnation.com/memberimages/20070107.png
A: alt text http://www.geekherocomic.com/comics/2009-07-31-points-of-view.png
A:
XKCD hits the button every time.
A:
(I hear this is an option in the latest Ubuntu release.)
A: alt text http://contikistrip.kjempekjekt.com/images/contikistrip3.png
A: What?! No PhDComics? Check this out:
alt text http://www.phdcomics.com/comics/archive/phd1112.gif
Remember this was 1997!!
Also read the following strips... Hilarious!
A: I'm going to show my age with the this 1993 classic: On the Internet, nobody knows you're a dog.
A: I didn't know...
A: http://www.irregularwebcomic.net/75.html http://www.irregularwebcomic.net/comics/irreg0075.jpg
Edited for grammar :P
A: not directly something about programming, but i guess every programmer knows flowcharts xD
A: Another one from xkcd
A: alt text http://img183.imageshack.us/img183/1350/pizzasv6.png
A: On victory:
A: http://cache.g4tv.com/images/ttv/graphics/thescreensavers/3546989.jpg
alt text http://cache.g4tv.com/images/ttv/graphics/thescreensavers/3546989.jpg
A: alt text http://i40.tinypic.com/12649i1.png
A: Lately I started reading Geek Hero Comic. My favorite parts so far:
alt text http://www.geekherocomic.com/comics/2008-10-17-How-to-improve-productivity.png
alt text http://www.geekherocomic.com/comics/2008-10-21-The-art-of-programming.png
A: alt text http://wondermark.com/comics/128.gif
(guess you have to work on sev zero bugs to get this one)
A: xkcd:
A:
A: Dilbert http://www.laputan.org/images/pictures/elbonia-900406.gif
This is something I found reading about anti-patterns.
A: alt text http://www.bugbash.net/strips/bug-bash20060227.gif
A: This is old, I'm totally not sure about the permissions and I'm ready to remove it. alt text http://img120.imageshack.us/img120/8456/easycfm4.jpg
A: "When I started programming, we didn't have any of these sissy 'icons' and 'Windows.' All we had were zeros and ones -- and sometimes we didn't even have ones. I wrote an entire database program using only zeros." "You had zeros? We had to use the letter 'O'." -Dilbert (Scott Adams) http://www.mscha.net/tmp/dt19920908.gif
A:
A: Dilbert is the top favorite, but I've also really enjoyed the xkcd comics the last couple years. I've got a couple of those posted up in my cube... I try really hard to live by this one.
A: I think the IT consultant's image is the best:
Reasons why people who work with computers have a lot of spare time
(linking because the picture's pretty big)
A:
https://brainstuck.com/2008/08/08/t-c/
I'm guessing that this would probably never happen, as agreeing to the terms and conditions is more important than actually reading them (from the software publisher perspective).
A:
A: Just had to put this here
http://www.implementingscrum.com/images/070806-scrumtoon.jpg
From here
A: Ten reasons you know you're living in 2009
*
*You accidentally enter your password on the microwave.
*You haven't played solitaire with real cards in years.
*You have a list of 15 phone numbers to reach your family of 3.
*You e-mail the person who works at the desk next to you.
*Your reason for not staying in touch with friends and family is that they don't have e-mail addresses.
*You pull up in your own driveway and use your cell phone to see if anyone is home.
*Every commercial on television has a web site at the bottom of the screen.
*Leaving the house without your cell phone, which you didn't have the first 20 or 30 (or 60) years of your life, is now a cause for panic and you turn around to go and get it.
*You get up in the morning and go online before getting your coffee.
*You're reading this and nodding and laughing. :)
A: One of my favourite :)
A:
A: Doctor Fun... http://www.ibiblio.org/Dave/Dr-Fun/df200002/df20000210.jpg
A:
If all bugs were so easy to close.
A: One more which I liked: No offences.. :)
how it works
A:
http://xkcd.com/518/
A: You vs Technology http://www.weblogcartoons.com/cartoons/you-v-tech.gif
A: For an Agile shop ... THIS is fantastic... As a dev, it just speaks volumes.
Great Fun: from Implementing Scrum by Clark & vizdos.
A: SPOILER ALERT!!!
A: alt text http://img17.imageshack.us/img17/879/billgateslastmoments.jpg
A:
This is from Steve Yegge talking about the verbosity of Java.
A: For those who have worked at Accenture www.bigtimeconsulting.com has some awesome ones...
Here are just a few:
Bold New Changes
Advice
Dinner
Hit and Run
Tech Support
Quitter
A:
A: http://wondermark.com/c/2008-11-25-464forgot.gif
A: in which we discuss the philosophy of computer science and programming architecture by the seemingly-defunct Standard Out:
in which we discuss the philosophy of computer science and programming architecture http://lh3.ggpht.com/tremendotron/SKFBIbEccOI/AAAAAAAAAI0/qOTa3XJZHho/in%20which%20we%20discuss%20.jpg
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "836"
}
|
Q: How to deploy SQL Reporting 2005 when Data Sources are locked? The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom developed SQL Reporting 2005 project in Visual Studio that runs fine on my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files.
However, for security purposes, data sources are maintained explicitly by DBAs and stored in a separated locked down common folder on the reporting server. I had them create the data source for me.
When I attempt to deploy from VS, it gives me the error
The item '/Data Sources' already exists.
I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct.
I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?
A: Add a string parameter called ConnectionString to your report and save it. Next, open your RDL in a text editor and change the data source definitions like this.
<DataSources>
<DataSource Name="preserve the datasource name you've been using">
<ConnectionProperties>
<DataProvider>SQL</DataProvider>
<ConnectString>=Parameters!ConnectionString.Value</ConnectString>
</ConnectionProperties>
<rd:DataSourceID>preserve your existing GUID</rd:DataSourceID>
</DataSource>
</DataSources>
You will now find that you can pass in the database connection string as a report parameter. Be careful not to mention this to your DBAs because there is no provision in the SSRS security system for controlling this and they will go completely nuts when they discover that not only is the cage door open, it can't be closed.
A: You will receive the warning if the properties on the data source are such that they do not allow you to overwrite the data source. However, the rest of your project or report should deploy. Check the properties of your report and I think that you will find that it is the current version. This is only a warning and this is not fatal to your report deployment.
If your deployment is failing because of some security issue with the data source then delete it and the rest of your project should deploy. VS will deploy the reports or models even if you get an error on the data source. If the project will still not deploy then the problem is not your data source.
A: Forgive the thread necromancy, but this is what came up in Google.
While I can't take credit for the answer, I suspect this is what's actually going on
http://social.msdn.microsoft.com/Forums/en-US/sqlreportingservices/thread/33423ef3-4a28-4c1d-aded-eac33770659d
I'm having the same problem and my DBA is setting me up now.
A: Did you check the configuration you are setting the OverwriteDataSource project property setting to False? The default Configuration is Active(DebugLocal) but you may have to set the OverwriteDataSource setting to False for another Configuration such as Production. You can use the All Configurations to force the OverwriteDataSource setting to False for all deployments.
A: This should only be a warning (We have the same situation at my shop). The only way I can imagine this would cause an error is if you had Visual Studio set to treat warnings as errors (It would then not deploy because of the error). Try changing this option if you have it set.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Error handling reporting methods with ASP.NET 2.0 / C# Does anyone know of an open source module or a good method for handling application errors and e-mailing them to an admin and/or saving to a database?
A: ELMAH is a great drop-in tool for this. DLL in the bin directory, and some markup to add to the web.config and you're done.
A: log4net can save errors to a database or send emails. We use this at my job (despite the fact it caused Jeff Atwood much stress in the SO beta). Catch the errors in the global.asax page in the Application_Error method.
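A minimal sketch of that hookup in Global.asax.cs (the logger declaration is illustrative; the appenders configured in web.config decide whether errors go to email, a database, or both):
private static readonly log4net.ILog Log =
    log4net.LogManager.GetLogger(typeof(Global));

protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    Log.Error("Unhandled application error", ex);
}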
A: You should look into the application_error method in the global.asax page to catch the errors, and into either log4net and the enterprise library to log those errors in whatever form you choose to any provider you choose -- such as database or e-mail.
A: log4net is open source, fairly easy to learn, and can be configured to log to multiple sources (file, email, db, eventlog) very easily.
A: CALM does this and more. It's not free or open source, but it is inexpensive and comprehensive.
caveat: I am the author of CALM
A: MS Enterprise Library definitely offers this functionality. It has a little bit of a learning curve, but once it's set up, it is pretty easy to maintain. It also offers several different logging sources, but is not open source.
A: I have a free provider as well that is available from my website that is a drop in solution for error handling.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Keep pagination repeatable if change operations are performed If one wants to paginate results from a data source that supports pagination, one has to go through a process of:
*
*defining the page size - that is, the number of results to show per page;
*fetching each page requested by the user, using an offset = page number (0-based) * page size;
*showing the results of the fetched page.
All this works just fine, except for the fact that an operation may change the backend data and screw up the pagination in progress. I am talking about someone inserting data between page fetches, or deleting data.
page_size = 10;
get page 0 -> results from 0 to 9;
user inserts a record that due to the query being executed goes to page 0 - the one just shown;
get page 1 -> results from 10 to 19 - the first results on the page is the result on the old page 0.
The described behavior can confuse the viewer. Do you know any practical solution to work around this problem?
A: There are a few schools of thought on this.
*
*Data gets updated: let it be.
*You could implement some sort of caching method that will hold the entire result set (this might not be an option if working with really large datasets).
*You could do a comparison on each page operation and notify the user if the total record count changes.
A: If the updates you are concerned with are primarily new insertions (for example, StackOverflow itself seems to suffer from this problem when paging through questions and new questions come in) one way to handle it is to capture a timestamp when you issue the first pagination query, and then limit the results of requests for subsequent pages to items which existed before that timestamp.
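A sketch of that query in generic SQL (LIMIT/OFFSET syntax varies by database; it assumes the table has a creation timestamp column, and that the application captured :first_request_time when page 0 was served and computes :offset as page number * page size):
SELECT *
FROM items
WHERE created_at <= :first_request_time  -- exclude rows inserted after the first page
ORDER BY created_at DESC
LIMIT :page_size OFFSET :offset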
A: As long as users understand that the underlying data is always changing, they won't be confused. So just do it the straightforward way.
You could cache the first few pages of the result and use that for subsequent views, but then the results will be out of sync with the database, which is even more confusing.
A: One option is to exclude new incoming data objects from the result. This could be done with the session start time. You could add this, for example, to your JWT and then have behavior similar to Twitter's ("14 new Tweets").
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Passing EXE data down to one or more DLLs Our current application is a single OpenGL EXE containing multiple pages. The EXE is responsible for accessing data sent across the network via UDP. It accumulates the data and stores it in a host of singleton structures. The individual pages within the EXE access the singleton structures to process the data as they see fit.
In an attempt to lighten our EXE footprint and to support our attempts at configuration management, we have decided to split the pages out into a single DLL that the EXE will load. It is our intention to have the EXE be the shell into which the pages from the DLL will be loaded. The EXE will still have all communication responsibilities (UDP, Corba, User, etc). The pages will still be responsible for displaying whatever it is they do.
The question (finally) becomes: How do I pass this myriad of data collected from the EXE down to the consuming DLL based pages. The Singleton concept no longer holds water as the singletons we use (ACE_Singleton) do not allow this level of direction. We can export singletons from the DLL to the consuming EXE all day long, but I have yet to figure out the reverse. I have come up with the following options - none of which I like, so I was hoping someone out there would have a better one :)
*
*Wrap up all the data that is currently stored in separate singletons into another DLL that would export a "true" singleton. Eg. The singleton exported from the DLL would be the same - no matter what EXE loaded it - sort of like shared memory. This is an intriguing choice but would cause problems down the line with our deployment scenarios. I could go into detail about those issues if folks are really smitten with this idea.
*Create a static DLL level structure that contains all the pertinent data. The EXE would push this data down to the DLL upon DLL load so that the pages contained within the DLL would have access to the data. This seems like the simplest solution - even if it entails editing every single page in our application - over 100. It also seems a bit sloppy. All the data is just in a global. Not very sexy or C++y either.
So, anyone else out there have a solution for this problem?
The application is written using Visual C++ 9.0 (VisualStudio 2008) for use on Windows XP. For some reason Vista is not supported in our lab yet - even though our customers are using it.
A: Give all the DLLs a function SetGlobalDataPointer(Singleton*). Your EXE calls this function before it calls any other DLL function. In the DLL code, replace all occurrences of theSingleton. with theSingletonPtr->
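A minimal sketch of that suggestion (Singleton stands in for the EXE's actual data-holder type; the export mechanics are an assumption):
// Shared header, visible to both the EXE and the DLL:
class Singleton;
// In the DLL:
static Singleton* theSingletonPtr = 0;
extern "C" __declspec(dllexport)
void SetGlobalDataPointer(Singleton* instance)
{
    theSingletonPtr = instance; // DLL code now dereferences theSingletonPtr->
}
// In the EXE, immediately after LoadLibrary/GetProcAddress and before any
// other DLL call:
//     SetGlobalDataPointer(&theSingleton);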
A: You could either:
*
*put everything bar the very outermost shell into a 'common' DLL;
*use a DEF file to generate export functions from your EXE.
The second is very uncommon but it is possible to generate an import library just from the DEF file. Use LIB /DEF to generate the import library. See Working with Import Libraries and Export Files.
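For illustration, a hedged sketch of that route (module and symbol names are hypothetical):
; MyApp.def -- exports from the EXE
LIBRARY MyApp
EXPORTS
    GetSharedData
Then generate the import library that the DLL links against:
lib /DEF:MyApp.def /OUT:MyApp.lib /MACHINE:X86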
A: Unfortunately, it sounds like you've got a lot of existing code to tinker with. In that case, I'd just go with (2), assuming it doesn't get too large and unwieldy.
From your description, it sounds like data at the EXE level only needs to be sent once on DLL load.
If (2) is too messy, I'd refactor it a little to have a base "DLLPage" class with Serialize/UnSerialize() functions. Never export the class itself, only the individual functions you need in it (this helps tremendously as the class changes... very weird breaks happen with class-level exports). You'll need the constructor/destructors, and probably every public member.
There could be some heap management issues, so I also like to overload new/delete and have all classes use a centralized new/delete located in a helper DLL.
A: Before you take the plunge breaking the EXE up, you should read up on memory management and DLLs.
Here is one article that talks about CRT object issues, but the same thing applies to your own C++ objects.
Potential Errors Passing CRT Objects Across DLL Boundaries
A: First option: place all the data that the exe holds in shared memory. The dlls can then happily access it, as long as you have proper locking in place.
Second option: transfer the memory to the dlls using an exported function pointer - the exe has a function, the dll calls another function in the exe that returns this function as a pointer, which the dll can then call. That exported function can transfer data as a normal structure on the stack.
Third option: if you use the same runtime, just export a pointer that gives you direct access to the memory.
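To make the second and third options concrete, here is a hedged sketch of exporting an accessor from the EXE and resolving it from the DLL at runtime (the SharedData struct and function names are hypothetical):
// Shared header (both modules include this):
struct SharedData {
    int    packetCount;
    double lastValue;
};
// In the EXE -- export the accessor via __declspec or a DEF file:
extern "C" __declspec(dllexport) SharedData* GetSharedData()
{
    static SharedData data = { 0, 0.0 };
    return &data;
}
// In the DLL -- resolve the accessor from the host EXE:
#include <windows.h>
typedef SharedData* (*GetSharedDataFn)();
SharedData* FetchFromHost()
{
    // GetModuleHandle(NULL) returns the module handle of the running EXE.
    HMODULE exe = GetModuleHandle(NULL);
    GetSharedDataFn fn = (GetSharedDataFn) GetProcAddress(exe, "GetSharedData");
    return fn ? fn() : NULL;
}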
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How To Read Active Directory Group Membership From PHP/IIS using COM? I have the following code:
$bind = new COM("LDAP://CN=GroupName,OU=Groups,OU=Division,DC=company,DC=local");
When I execute it from a command-prompt, it runs fine. When it runs under IIS/PHP/ISAPI, it barfs.
Fatal error: Uncaught exception 'com_exception' with message 'Failed to create COM object `LDAP://CN=...[cut]...,DC=local':
An operations error occurred. ' in index.php
Stack trace:
#0 index.php: com->com('LDAP://CN=...')
#1 {main} thrown
IIS is configured for Windows Authentication (no anonymous, no basic, no digest) and I am connecting as the same user as the command prompt. I cannot find any specific errors in the IIS logfiles or the eventlog.
The main purpose of this exercise is to refrain from keeping user credentials in my script and relying on IIS authentication to pass them through to the active directory. I understand that you can use LDAP to accomplish the same thing, but as far as I know credentials cannot be passed through.
Perhaps it is in some way related to the error I get when I try to port it to ASP. I get error 80072020 (which I'm currently looking up).
The event logs show nothing out of the ordinary. No warnings, no errors. Full security auditing is enabled (success and failure on every item in the security policy), and it shows successful Windows logons for every user I authenticate against the web page (which is expected.)
A: Since you're using Windows Authentication in IIS, you may have some security events in the Windows Event log. I would check the Event log for Security Events as well as Application Events and see if you're hitting any sort of permissions issues.
Also, since you're basically just communicating with AD via LDAP, you might look into using a native LDAP library for PHP rather than COM.
You'll probably have to enable the extension in your php.ini. Worth looking at.
A: It seems to be working now.
I enabled "Trust this computer for delegation" for the computer object in Active Directory. Normally IIS cannot both authenticate you and then subsequently impersonate you across the network (in my case to a domain controller to query Active Directory) without the delegation trust enabled.
You just have to be sure that it's authenticating using Kerberos and not NTLM or some other digest authentication because the digest is not trusted to use as an impersonation token.
It fixed both my PHP and ASP scripts.
A: Well, if you want to use LDAP, let me point you to the LDAP authentication code we use for Maia Mailguard: look for the function named lauth_ldap
I think it requires ldap version 3, so you have to set that parameter for ldap. To verify the password, we use the ldap bind function to let the ldap server authenticate.
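For illustration, a minimal PHP sketch of that bind-based check (the host name, the DOMAIN\user form, and the $username/$password variables are assumptions for an AD server):
<?php
$conn = ldap_connect('dc.acme.local');
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3); // v3, as noted above
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);        // often needed for AD
// Let the LDAP server verify the password by binding as the user.
if (@ldap_bind($conn, 'ACME\\' . $username, $password)) {
    echo 'Authenticated';
} else {
    echo 'Bad credentials';
}
ldap_unbind($conn);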
A: I'm no AD/COM/IIS expert, but it could be a permissions problem, e.g. the IUSR_computername user does not have the necessary access within the directory, or you're not binding as a specific user.
The alarm bell for me is the fact it runs ok from command line (e.g. running with your permissions) but fails on IIS (e.g. not your permissions).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Hibernate Query By Example and Projections To make it short: does Hibernate not support projections together with query by example? I found this post:
The code is this:
User usr = new User();
usr.setCity("TEST");
getCurrentSession().createCriteria(User.class)
.setProjection( Projections.distinct( Projections.projectionList()
.add( Projections.property("name"), "name")
.add( Projections.property("city"), "city")))
.add( Example.create(usr))
Like the other poster said, the generated SQL keeps having a where clause referring to just y0_ = ? instead of this_.city.
I already tried several approaches, and searched the issue tracker but found nothing about this.
I even tried to use Projection alias and Transformers, but it does not work:
User usr = new User();
usr.setCity("TEST");
getCurrentSession().createCriteria(User.class)
.setProjection( Projections.distinct( Projections.projectionList()
.add( Projections.property("name"), "name")
.add( Projections.property("city"), "city")))
.add( Example.create(usr)).setResultTransformer(Transformers.aliasToBean(User.class));
Has anyone used projections and query by example ?
A: The real problem here is that there is a bug in hibernate where it uses select-list aliases in the where-clause:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-817
Just in case someone lands here looking for answers, go look at the ticket. It took 5 years to fix but in theory it'll be in one of the next releases and then I suspect your issue will go away.
A: The problem seems to happen when you have an alias the same name as the objects property. Hibernate seems to pick up the alias and use it in the sql. I found this documented here and here, and I believe it to be a bug in Hibernate, although I am not sure that the Hibernate team agrees.
Either way, I have found a simple workaround that works in my case. Your mileage may vary. The details are below; I tried to simplify the code for this sample, so I apologize for any errors or typos:
Criteria criteria = session.createCriteria(MyClass.class)
.setProjection(Projections.projectionList()
.add(Projections.property("sectionHeader"), "sectionHeader")
.add(Projections.property("subSectionHeader"), "subSectionHeader")
.add(Projections.property("sectionNumber"), "sectionNumber"))
.add(Restrictions.ilike("sectionHeader", sectionHeaderVar)) // <- Problem!
.setResultTransformer(Transformers.aliasToBean(MyDTO.class));
Would produce this sql:
select
this_.SECTION_HEADER as y1_,
this_.SUB_SECTION_HEADER as y2_,
this_.SECTION_NUMBER as y3_
from
MY_TABLE this_
where
( lower(y1_) like ? )
Which was causing an error: java.sql.SQLException: ORA-00904: "Y1_": invalid identifier
But, when I changed my restriction to use "this", like so:
Criteria criteria = session.createCriteria(MyClass.class)
.setProjection(Projections.projectionList()
.add(Projections.property("sectionHeader"), "sectionHeader")
.add(Projections.property("subSectionHeader"), "subSectionHeader")
.add(Projections.property("sectionNumber"), "sectionNumber"))
.add(Restrictions.ilike("this.sectionHeader", sectionHeaderVar)) // <- Problem Solved!
.setResultTransformer(Transformers.aliasToBean(MyDTO.class));
It produced the following sql and my problem was solved.
select
this_.SECTION_HEADER as y1_,
this_.SUB_SECTION_HEADER as y2_,
this_.SECTION_NUMBER as y3_
from
MY_TABLE this_
where
( lower(this_.SECTION_HEADER) like ? )
That's it! A pretty simple fix to a painful problem. I don't know how this fix would translate to the query-by-example problem, but it may get you closer.
A: Can I see your User class? This is just using restrictions below. I don't see why Restrictions would be really any different than Examples (I think null fields get ignored by default in examples though).
getCurrentSession().createCriteria(User.class)
.setProjection( Projections.distinct( Projections.projectionList()
.add( Projections.property("name"), "name")
.add( Projections.property("city"), "city")))
.add( Restrictions.eq("city", "TEST")))
.setResultTransformer(Transformers.aliasToBean(User.class))
.list();
I've never used aliasToBean, but I just read about it. You could also just loop over the results:
List<Object> rows = criteria.list();
for (Object r : rows) {
    Object[] row = (Object[]) r;
    Type t = (Type) row[0]; // cast each column to its mapped type
}
If you have to you can manually populate User yourself that way.
It's sort of hard to look into this without some more information to diagnose the issue.
A: I'm facing a similar problem. I'm using Query by Example and I want to sort the results by a custom field. In SQL I would do something like:
select pageNo, abs(pageNo - 434) as diff
from relA
where year = 2009
order by diff
It works fine without the order-by-clause. What I got is
Criteria crit = getSession().createCriteria(Entity.class);
crit.add(exampleObject);
ProjectionList pl = Projections.projectionList();
pl.add( Projections.property("id") );
pl.add(Projections.sqlProjection("abs(`pageNo`-"+pageNo+") as diff", new String[] {"diff"}, types ));
crit.setProjection(pl);
But when I add
crit.addOrder(Order.asc("diff"));
I get an org.hibernate.QueryException: could not resolve property: diff. The "this." workaround does not work here either.
PS: as I could not find any elaborate documentation on the use of QBE with Hibernate, all the stuff above is mainly a trial-and-error approach
A: ProjectionList pl = Projections.projectionList();
pl.add(Projections.property("id"));
pl.add(Projections.sqlProjection("abs(`pageNo`-" + pageNo + ") as diff", new String[] {"diff"}, types ), "diff"); // passing the alias here is the solution
crit.addOrder(Order.asc("diff"));
crit.setProjection(pl);
A: I do not really think so. What I found is that the "this." prefix causes Hibernate not to include any restrictions in its query, which means it returns all records. As for the Hibernate bug that was reported, I can see it's marked as fixed, but I was unable to download the patch.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: How can I avoid the warning from an unused parameter in PL/SQL? Sometimes, in PL/SQL you want to add a parameter to a package, function, or procedure in order to prepare for future functionality. For example:
create or replace function doGetMyAccountMoney( Type_Of_Currency IN char := 'EUR') return number
is
Result number(12,2);
begin
Result := 10000;
IF char <> 'EUR' THEN
-- ERROR NOT IMPLEMENTED YET
END IF;
return(Result);
end doGetMyAccountMoney;
It can lead to lots of warnings like
Compilation errors for FUNCTION APPUEMP_PRAC.DOGETMYACCOUNTMONEY
Error: Hint: Parameter 'Currency' is declared but never used in 'doGetMyAccountMoney'
Line: 1
What would be the best way to avoid those warnings?
A: I believe that this is controlled by the parameter PLSQL_WARNINGS, documented for 10gR2 here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams166.htm#REFRN10249
A: If you didn't have the ability to alter the warning levels, you could just bind the parameter value to a dummy value and document that they are for future use.
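A minimal sketch of that dummy-binding idea applied to the function above (the local variable name is arbitrary):
create or replace function doGetMyAccountMoney( Type_Of_Currency IN char := 'EUR') return number
is
Result number(12,2);
Unused char(3);
begin
Unused := Type_Of_Currency; -- touch the parameter so the compiler sees it used
Result := 10000;
return(Result);
end doGetMyAccountMoney;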
A: Well, your example has several errors. Most importantly, you would need to change "char" to "Currency" in the IF statement, which as far as I can see would avoid the warning as well.
A: Disable non-severe PL/SQL warnings:
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE';
A: Well, are you sure you have the name and the right in the correct order in that declaration?
It complains about a parameter named 'Currency', but you aren't actually using it, are you?
On the other hand, you are using something called char, what is that?
Or perhaps my knowledge of PL/SQL is way off, if so, leave a comment and I'll delete this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Understanding how Ada serializes a record I would like to be able to predict what will be in the resulting binary when I call Write in Ada to serialize a record. Do you know where I can look this up?
I have some legacy Ada software that produces a binary file by Write-ing a record, and I need to debug a C++ program that is supposed to write a compatible binary file. So, I would like to understand what rules Ada follows when it serializes a record, so that I can make sure that the C++ code will produce a functionally equivalent record.
A: The format of the serialised output of 'Write has absolutely nothing to do with representation clauses.
By default, the compiler will output record components without alignment padding in the order in which they're written in the record declaration, using a translation scheme that isn't defined by the standard (so you may not get interoperability between compilers). GNAT (the GCC Ada compiler) outputs each component in a whole number of bytes.
If you want to stream values of a type using some different format, you can override 'Write for the type. As an unusual example, you could stream to XML.
A: Basically, the compiler will reorder the components of your record types unless you use pragma PACK or pragma PRESERVE_LAYOUT with your record types. The compiler will also pad objects to maintain the alignment of record components. Component representations are as follows:
Integer: 8, 16, or 32 bit twos-complement signed numbers
Float: 32-bit IEEE format
Long_Float: 64-bit IEEE format
Fixed-Point: 8, 16, or 32 bit; however, the range and delta specified can affect being 16 or 32
Enumerations: Integer, usually first element is represented by 0
Booleans: Enumeration object, 8 bits long, The LSB stores the value: 0 = false, 1 = true
Characters: Enumeration object, 8 bits long, unsigned 0 through 127
Access Types: 32 bits, 32-bit value of 0 represents NULL
Arrays: stored contiguously in row-major order, size depends on base type. The array is padded to ensure all elements have the proper alignment for their types.
A: As mentioned by others, without additional instruction the compiler will make its own decisions about record layout. The best approach would be to change the original code to write the record using a specific layout. In particular, the record representation clause allows the Ada programmer to specify exactly the physical layout for a record. In fact, you should check to see whether the original code has one of these for the type in question. If it does, then this would answer your question precisely.
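For illustration, a record representation clause that pins each component to explicit storage (the record itself is hypothetical):
type Header is record
   Version : Integer range 0 .. 255;
   Flags   : Integer range 0 .. 255;
   Length  : Integer range 0 .. 65_535;
end record;
for Header use record
   Version at 0 range 0 .. 7;
   Flags   at 0 range 8 .. 15;
   Length  at 2 range 0 .. 15;
end record;
for Header'Size use 32;
In this answer's approach, such a clause is the precise layout description to match on the C++ side.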
A: The Ada95 Language Reference Manual says (section 13.13.2):
"For elementary types, the representation in terms of stream elements is implementation defined. For composite types, the Write or Read attribute for each component is called in a canonical order. The canonical order of components is last dimension varying fastest for an array, and positional aggregate order for a record."
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do you authenticate against an Active Directory server using Spring Security? I'm writing a Spring web application that requires users to login. My company has an Active Directory server that I'd like to make use of for this purpose. However, I'm having trouble using Spring Security to connect to the server.
I'm using Spring 2.5.5 and Spring Security 2.0.3, along with Java 1.6.
If I change the LDAP URL to the wrong IP address, it doesn't throw an exception or anything, so I'm wondering if it's even trying to connect to the server to begin with.
Although the web application starts up just fine, any information I enter into the login page is rejected. I had previously used an InMemoryDaoImpl, which worked fine, so the rest of my application seems to be configured correctly.
Here are my security-related beans:
<beans:bean id="ldapAuthProvider" class="org.springframework.security.providers.ldap.LdapAuthenticationProvider">
<beans:constructor-arg>
<beans:bean class="org.springframework.security.providers.ldap.authenticator.BindAuthenticator">
<beans:constructor-arg ref="initialDirContextFactory" />
<beans:property name="userDnPatterns">
<beans:list>
<beans:value>CN={0},OU=SBSUsers,OU=Users,OU=MyBusiness,DC=Acme,DC=com</beans:value>
</beans:list>
</beans:property>
</beans:bean>
</beans:constructor-arg>
</beans:bean>
<beans:bean id="userDetailsService" class="org.springframework.security.userdetails.ldap.LdapUserDetailsManager">
<beans:constructor-arg ref="initialDirContextFactory" />
</beans:bean>
<beans:bean id="initialDirContextFactory" class="org.springframework.security.ldap.DefaultInitialDirContextFactory">
<beans:constructor-arg value="ldap://192.168.123.456:389/DC=Acme,DC=com" />
</beans:bean>
A: I had the same banging-my-head-against-the-wall experience you did, and ended up writing a custom authentication provider that does an LDAP query against the Active Directory server.
So my security-related beans are:
<beans:bean id="contextSource"
class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
<beans:constructor-arg value="ldap://hostname.queso.com:389/" />
</beans:bean>
<beans:bean id="ldapAuthenticationProvider"
class="org.queso.ad.service.authentication.LdapAuthenticationProvider">
<beans:property name="authenticator" ref="ldapAuthenticator" />
<custom-authentication-provider />
</beans:bean>
<beans:bean id="ldapAuthenticator"
class="org.queso.ad.service.authentication.LdapAuthenticatorImpl">
<beans:property name="contextFactory" ref="contextSource" />
<beans:property name="principalPrefix" value="QUESO\" />
</beans:bean>
Then the LdapAuthenticationProvider class:
/**
* Custom Spring Security authentication provider which tries to bind to an LDAP server with
* the passed-in credentials; of note, when used with the custom {@link LdapAuthenticatorImpl},
* does <strong>not</strong> require an LDAP username and password for initial binding.
*
* @author Jason
*/
public class LdapAuthenticationProvider implements AuthenticationProvider {
private LdapAuthenticator authenticator;
public Authentication authenticate(Authentication auth) throws AuthenticationException {
// Authenticate, using the passed-in credentials.
DirContextOperations authAdapter = authenticator.authenticate(auth);
// Creating an LdapAuthenticationToken (rather than using the existing Authentication
// object) allows us to add the already-created LDAP context for our app to use later.
LdapAuthenticationToken ldapAuth = new LdapAuthenticationToken(auth, "ROLE_USER");
InitialLdapContext ldapContext = (InitialLdapContext) authAdapter
.getObjectAttribute("ldapContext");
if (ldapContext != null) {
ldapAuth.setContext(ldapContext);
}
return ldapAuth;
}
public boolean supports(Class clazz) {
return (UsernamePasswordAuthenticationToken.class.isAssignableFrom(clazz));
}
public LdapAuthenticator getAuthenticator() {
return authenticator;
}
public void setAuthenticator(LdapAuthenticator authenticator) {
this.authenticator = authenticator;
}
}
Then the LdapAuthenticatorImpl class:
/**
* Custom Spring Security LDAP authenticator which tries to bind to an LDAP server using the
* passed-in credentials; does <strong>not</strong> require "master" credentials for an
* initial bind prior to searching for the passed-in username.
*
* @author Jason
*/
public class LdapAuthenticatorImpl implements LdapAuthenticator {
private DefaultSpringSecurityContextSource contextFactory;
private String principalPrefix = "";
public DirContextOperations authenticate(Authentication authentication) {
// Grab the username and password out of the authentication object.
String principal = principalPrefix + authentication.getName();
String password = "";
if (authentication.getCredentials() != null) {
password = authentication.getCredentials().toString();
}
// If we have a valid username and password, try to authenticate.
if (!("".equals(principal.trim())) && !("".equals(password.trim()))) {
InitialLdapContext ldapContext = (InitialLdapContext) contextFactory
.getReadWriteContext(principal, password);
// We need to pass the context back out, so that the auth provider can add it to the
// Authentication object.
DirContextOperations authAdapter = new DirContextAdapter();
authAdapter.addAttributeValue("ldapContext", ldapContext);
return authAdapter;
} else {
throw new BadCredentialsException("Blank username and/or password!");
}
}
/**
* Since the InitialLdapContext that's stored as a property of an LdapAuthenticationToken is
* transient (because it isn't Serializable), we need some way to recreate the
* InitialLdapContext if it's null (e.g., if the LdapAuthenticationToken has been serialized
* and deserialized). This is that mechanism.
*
* @param authenticator
* the LdapAuthenticator instance from your application's context
* @param auth
* the LdapAuthenticationToken in which to recreate the InitialLdapContext
* @return
*/
static public InitialLdapContext recreateLdapContext(LdapAuthenticator authenticator,
LdapAuthenticationToken auth) {
DirContextOperations authAdapter = authenticator.authenticate(auth);
InitialLdapContext context = (InitialLdapContext) authAdapter
.getObjectAttribute("ldapContext");
auth.setContext(context);
return context;
}
public DefaultSpringSecurityContextSource getContextFactory() {
return contextFactory;
}
/**
* Set the context factory to use for generating a new LDAP context.
*
* @param contextFactory
*/
public void setContextFactory(DefaultSpringSecurityContextSource contextFactory) {
this.contextFactory = contextFactory;
}
public String getPrincipalPrefix() {
return principalPrefix;
}
/**
* Set the string to be prepended to all principal names prior to attempting authentication
* against the LDAP server. (For example, if the Active Directory wants the domain-name-plus
* backslash prepended, use this.)
*
* @param principalPrefix
*/
public void setPrincipalPrefix(String principalPrefix) {
if (principalPrefix != null) {
this.principalPrefix = principalPrefix;
} else {
this.principalPrefix = "";
}
}
}
And finally, the LdapAuthenticationToken class:
/**
* <p>
* Authentication token to use when an app needs further access to the LDAP context used to
* authenticate the user.
* </p>
*
* <p>
* When this is the Authentication object stored in the Spring Security context, an application
* can retrieve the current LDAP context thusly:
* </p>
*
* <pre>
* LdapAuthenticationToken ldapAuth = (LdapAuthenticationToken) SecurityContextHolder
* .getContext().getAuthentication();
* InitialLdapContext ldapContext = ldapAuth.getContext();
* </pre>
*
* @author Jason
*
*/
public class LdapAuthenticationToken extends AbstractAuthenticationToken {
private static final long serialVersionUID = -5040340622950665401L;
private Authentication auth;
transient private InitialLdapContext context;
private List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
/**
* Construct a new LdapAuthenticationToken, using an existing Authentication object and
* granting all users a default authority.
*
* @param auth
* @param defaultAuthority
*/
public LdapAuthenticationToken(Authentication auth, GrantedAuthority defaultAuthority) {
this.auth = auth;
if (auth.getAuthorities() != null) {
this.authorities.addAll(Arrays.asList(auth.getAuthorities()));
}
if (defaultAuthority != null) {
this.authorities.add(defaultAuthority);
}
super.setAuthenticated(true);
}
/**
* Construct a new LdapAuthenticationToken, using an existing Authentication object and
* granting all users a default authority.
*
* @param auth
* @param defaultAuthority
*/
public LdapAuthenticationToken(Authentication auth, String defaultAuthority) {
this(auth, new GrantedAuthorityImpl(defaultAuthority));
}
public GrantedAuthority[] getAuthorities() {
GrantedAuthority[] authoritiesArray = this.authorities.toArray(new GrantedAuthority[0]);
return authoritiesArray;
}
public void addAuthority(GrantedAuthority authority) {
this.authorities.add(authority);
}
public Object getCredentials() {
return auth.getCredentials();
}
public Object getPrincipal() {
return auth.getPrincipal();
}
/**
* Retrieve the LDAP context attached to this user's authentication object.
*
* @return the LDAP context
*/
public InitialLdapContext getContext() {
return context;
}
/**
* Attach an LDAP context to this user's authentication object.
*
* @param context
* the LDAP context
*/
public void setContext(InitialLdapContext context) {
this.context = context;
}
}
You'll notice that there are a few bits in there that you might not need.
For example, my app needed to retain the successfully-logged-in LDAP context for further use by the user once logged in -- the app's purpose is to let users log in via their AD credentials and then perform further AD-related functions. So because of that, I have a custom authentication token, LdapAuthenticationToken, that I pass around (rather than Spring's default Authentication token) which allows me to attach the LDAP context. In LdapAuthenticationProvider.authenticate(), I create that token and pass it back out; in LdapAuthenticatorImpl.authenticate(), I attach the logged-in context to the return object so that it can be added to the user's Spring authentication object.
Also, in LdapAuthenticationProvider.authenticate(), I assign all logged-in users the ROLE_USER role -- that's what lets me then test for that role in my intercept-url elements. You'll want to make this match whatever role you want to test for, or even assign roles based on Active Directory groups or whatever.
Finally, and a corollary to that, the way I implemented LdapAuthenticationProvider.authenticate() gives all users with valid AD accounts the same ROLE_USER role. Obviously, in that method, you can perform further tests on the user (i.e., is the user in a specific AD group?) and assign roles that way, or even test for some condition before even granting the user access at all.
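As a hedged sketch of that last idea, here is a hypothetical helper that reads the user's memberOf attribute through the LDAP context obtained above and maps one assumed AD group onto a role (the group and role names are made up):
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.ldap.InitialLdapContext;
static void addGroupRoles(InitialLdapContext ctx, String userDn,
        LdapAuthenticationToken ldapAuth) throws NamingException {
    // Fetch only the memberOf attribute for the authenticated user's DN.
    Attributes attrs = ctx.getAttributes(userDn, new String[] { "memberOf" });
    Attribute groups = attrs.get("memberOf");
    if (groups == null) {
        return;
    }
    for (int i = 0; i < groups.size(); i++) {
        String groupDn = (String) groups.get(i);
        // Map one assumed AD group onto an application role.
        if (groupDn.toUpperCase().startsWith("CN=WEBAPPADMINS")) {
            ldapAuth.addAuthority(new GrantedAuthorityImpl("ROLE_ADMIN"));
        }
    }
}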
A: For reference, Spring Security 3.1 has an authentication provider specifically for Active Directory.
A: Just to bring this to an up-to-date status.
Spring Security 3.0 has a complete package with default implementations devoted to ldap-bind as well as query and compare authentication.
A: I was able to authenticate against active directory using spring security 2.0.4.
I documented the settings
http://maniezhilan.blogspot.com/2008/10/spring-security-204-with-active.html
A: As in Luke's answer above:
Spring Security 3.1 has an authentication provider specifically for Active Directory.
Here is the detail of how this can be easily done using ActiveDirectoryLdapAuthenticationProvider.
In resources.groovy:
ldapAuthProvider1(ActiveDirectoryLdapAuthenticationProvider,
"mydomain.com",
"ldap://mydomain.com/"
)
In Config.groovy:
grails.plugin.springsecurity.providerNames = ['ldapAuthProvider1']
This is all the code you need. You can pretty much remove all other grails.plugin.springsecurity.ldap.* settings in Config.groovy as they don't apply to this AD setup.
For documentation, see:
http://docs.spring.io/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#ldap-active-directory
A:
If you are using Spring Security 4, you can also implement the same thing using the class given below:
*
*SecurityConfig.java
@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
static final Logger LOGGER = LoggerFactory.getLogger(SecurityConfig.class);
@Autowired
protected void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
auth.authenticationProvider(activeDirectoryLdapAuthenticationProvider());
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.antMatchers("/").permitAll()
.anyRequest().authenticated()
.and()
.formLogin()
.and()
.logout();
}
@Bean
public AuthenticationProvider activeDirectoryLdapAuthenticationProvider() {
ActiveDirectoryLdapAuthenticationProvider authenticationProvider =
new ActiveDirectoryLdapAuthenticationProvider("<domain>", "<url>");
authenticationProvider.setConvertSubErrorCodesToExceptions(true);
authenticationProvider.setUseAuthenticationRequestCredentials(true);
return authenticationProvider;
}
}
A: LDAP authentication without SSL is not safe: anyone can see the user credentials as they are transferred to the LDAP server. I suggest using the ldaps:// protocol for authentication. It doesn't require any major change on the Spring side, but you may run into some issues related to certificates. See LDAP Active Directory authentication in Spring with SSL for more details.
A: From Luke's answer above:
For reference, Spring Security 3.1 has an authentication provider
[specifically for Active Directory][1].
[1]:
http://static.springsource.org/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#ldap-active-directory
I tried the above with Spring Security 3.1.1: there are some slight changes from ldap - the active directory groups the user is a member of come through as original case.
Previously under ldap the groups were capitalized and prefixed with "ROLE_", which made them easy to find with a text search in a project, but this obviously might cause problems with Unix-style groups if, for some strange reason, you had two separate groups differentiated only by case (i.e. accounts and Accounts).
Also, the syntax requires manual specification of the domain controller name and port, which makes it a bit scary for redundancy. Surely there is a way of looking up the SRV DNS record for the domain in Java, i.e. the equivalent of (from the Samba 4 howto):
$ host -t SRV _ldap._tcp.samdom.example.com.
_ldap._tcp.samdom.example.com has SRV record 0 100 389 samba.samdom.example.com.
followed by regular A lookup:
$ host -t A samba.samdom.example.com.
samba.samdom.example.com has address 10.0.0.1
(Actually might need to lookup _kerberos SRV record too...)
The above was with Samba4.0rc1, we are progressively upgrading from Samba 3.x LDAP environment to Samba AD one.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Is there a better way to create an object-oriented class with jQuery? I use the jQuery extend function to extend a class prototype.
For example:
MyWidget = function(name_var) {
this.init(name_var);
}
$.extend(MyWidget.prototype, {
// object variables
widget_name: '',
init: function(widget_name) {
// do initialization here
this.widget_name = widget_name;
},
doSomething: function() {
// an example object method
alert('my name is '+this.widget_name);
}
});
// example of using the class built above
var widget1 = new MyWidget('widget one');
widget1.doSomething();
Is there a better way to do this? Is there a cleaner way to create the class above with only one statement instead of two?
A: I quite like John Resig's Simple JavaScript Inheritance.
var MyWidget = Class.extend({
init: function(widget_name){
this.widget_name = widget_name;
},
doSomething: function() {
alert('my name is ' + this.widget_name);
}
});
NB: The "Class" object demonstrated above isn't included in jQuery itself - it's a 25 line snippet from Mr. jQuery himself, provided in the article above.
A: jQuery doesn't offer that. But Prototype does, via Class.create.
A: Why not just use the simple OOP that JavaScript itself provides...long before jQuery?
var myClass = function(){};
myClass.prototype = {
some_property: null,
some_other_property: 0,
doSomething: function(msg) {
this.some_property = msg;
alert(this.some_property);
}
};
Then you just create an instance of the class:
var myClassObject = new myClass();
myClassObject.doSomething("Hello Worlds");
Simple!
A: To summarise what I have learned so far:
Here is the Base function that makes Class.extend() work in jQuery (Copied from Simple JavaScript Inheritance by John Resig):
// Inspired by base2 and Prototype
(function(){
var initializing = false, fnTest = /xyz/.test(function(){xyz;}) ? /\b_super\b/ : /.*/;
// The base Class implementation (does nothing)
this.Class = function(){};
// Create a new Class that inherits from this class
Class.extend = function(prop) {
var _super = this.prototype;
// Instantiate a base class (but only create the instance,
// don't run the init constructor)
initializing = true;
var prototype = new this();
initializing = false;
// Copy the properties over onto the new prototype
for (var name in prop) {
// Check if we're overwriting an existing function
prototype[name] = typeof prop[name] == "function" &&
typeof _super[name] == "function" && fnTest.test(prop[name]) ?
(function(name, fn){
return function() {
var tmp = this._super;
// Add a new ._super() method that is the same method
// but on the super-class
this._super = _super[name];
// The method only need to be bound temporarily, so we
// remove it when we're done executing
var ret = fn.apply(this, arguments);
this._super = tmp;
return ret;
};
})(name, prop[name]) :
prop[name];
}
// The dummy class constructor
function Class() {
// All construction is actually done in the init method
if ( !initializing && this.init )
this.init.apply(this, arguments);
}
// Populate our constructed prototype object
Class.prototype = prototype;
// Enforce the constructor to be what we expect
Class.prototype.constructor = Class;
// And make this class extendable
Class.extend = arguments.callee;
return Class;
};
})();
Once you have executed this code, the following code from insin's answer becomes possible:
var MyWidget = Class.extend({
init: function(widget_name){
this.widget_name = widget_name;
},
doSomething: function() {
alert('my name is ' + this.widget_name);
}
});
This is a nice, clean solution. But I'm interested to see if anyone has a solution that doesn't require adding anything to jquery.
A: This is long since dead, but if anyone else is searching for creating classes with jQuery, check this plugin:
http://plugins.jquery.com/project/HJS
A: I found this website an impressive one for OOP in JavaScript: Here
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
}
|
Q: .NET Testing Naming Conventions What are the best conventions of naming testing-assemblies in .NET (or any other language or platform)?
What I'm mainly split between are these options (please provide others!):
*
*Company.Website - the project
*Company.Website.Tests
or
*
*Company.Website
*Company.WebsiteTests
The problem with the first solution is that it looks like .Tests are a sub-namespace to the site, while they really are more parallel in my mind. What happens when a new sub-namespace comes into play, like Company.Website.Controls, where should I put the tests for that namespace, for instance?
Maybe it should even be: Tests.Company.Website and Tests.Company.Website.Controls, and so on.
A: I actually have an alternate parallel root.
Tests.Company.Website
It works nicely for disambiguating things when you have new sub namespaces.
A: I'm a big fan of structuring the test namespace like this:
Company.Tests.Website.xxx
Company.Tests.Website.Controls
Like you, I think of the tests as a parallel namespace structure to the main code and this provides you with that. It also has the advantage that, since the namespace still starts with your company name you shouldn't have any naming collisions with 3rd party libraries
A: I will go with
* Company.Website - the project
* Company.Website.Tests
The short answer is simple: tests and project are linked in code, therefore they should share a namespace.
If you want to split code and tests within a solution, you have that option anyway, e.g. you can set up a solution with
-Code Folder
*
*Company.Website
-Tests Folder
*
*Company.Website.Tests
A: I personally would go with
Company.Tests.Website
That way you have a common tests namespace and projects inside it, following the same structure as the actual project.
A: We follow an embedded approach:
Company.Namespace.Test
Company.Namespace.Data.Test
This way the tests are close to the code that is being tested, without having to toggle back and forth between projects or hunt down references to ensure there is a test covering a particular method. We also don't have to maintain two separate, but identical, hierarchies.
We can also test distinct parts of the code as we enhance and develop.
Seems a little weird at first, but over the long term it has worked really well for us.
A: I too prefer "Tests" prefixing the actual name of the assembly so that it's easy to see all of my unit test assemblies listed alphabetically together when I mass-select them to pull into NUnit or whatever test harness you are using.
So if Website were the name of my solution (and assemblies), I suggest -
Tests.Website.dll to go along with the actual code assembly Website.Dll
A: I usually name test projects Project-Tests for brevity in Solution Explorer, and I use Company.Namespace.Tests for namespaces.
A: I prefer to go with:
Company.Website.Tests
I don't care about any sub-namespaces like Company.Website.Controls, all of the tests go into the same namespace: Company.Website.Tests. You don't want your test namespaces to HAVE to be in parrallel with the rest of your code because it just makes refactoring namespaces take twice as long.
A: I prefer Company.Website.Spec and usually have one test project per solution
A: With MVC starting to become a reality in the .net web development world, I would start thinking along those lines. Remember that M, V and C are distinct components, so:
*
*Company.Namespace.Website
*Company.Namespace.Website.Core
*Company.Namespace.Website.Core.Tests
*Company.Namespace.Website.Model
*Company.Namespace.Website.Model.Tests
Website is your lightweight view.
Core contains controllers, helpers, the view interfaces, etc. Core.Tests are your tests for said Core.
Model is for your data model. The cool thing here is that your model tests can automate your database specific tests.
This may be overkill for some people, but I find that it allows me to separate concerns fairly easily.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Which thread should I process the RxTx SerialEvent.DATA_AVAILABLE event? I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) than how it works over serial.
My application has several threads and one of my biggest problems is that out of nowhere, I seem to be getting one to two extra bytes on my stream. I can't figure out where they come from or why. This problem seems to occur a lot more frequently when I write to the RxTx stream using another thread.
So I was wondering if I should process the read on the current RxTx thread or on another thread when I get the DATA_AVAILABLE event.
I'm hoping someone might have good or bad reasons for doing it one way or the other.
A: This is just a guess, but it may give you a clue.
Is it possible that send and receive share a buffer, or that when you send, the bytes are also echoed back on the input somehow? I have seen this before on some embedded systems.
You may find the best thing to do is initially keep both send and receive on the same thread. Another thing may be to make sure the output drains before trying a read.
Hopefully this may give you some clue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Get IFile from IWorkspaceRoot and location String This is an Eclipse question, and you can assume the Java package for all these Eclipse classes is org.eclipse.core.resources.
I want to get an IFile corresponding to a location String I have:
"platform:/resource/Tracbility_All_Supported_lib/processes/gastuff/globalht/GlobalHTInterface.wsdl"
I have the enclosing IWorkspace and IWorkspaceRoot. If I had the IPath corresponding to the location above, I could simply call IWorkspaceRoot.getFileForLocation(IPath).
How do I get the corresponding IPath from the location String? Or is there some other way to get the corresponding IFile?
A: String platformLocationString = portTypeContainer.getLocation();
String locationString = platformLocationString.substring("platform:/resource/".length());
IWorkspace workspace = ResourcesPlugin.getWorkspace();
IWorkspaceRoot workspaceRoot = workspace.getRoot();
IFile wSDLFile = (IFile) workspaceRoot.findMember(locationString);
A: Since IWorkspaceRoot is an IContainer, can't you just use workspaceRoot.findMember(String name) and cast the resulting IResource to IFile?
A: org.eclipse.core.runtime.Path implements IPath.
IPath p = new Path(locationString);
workspaceRoot.getFileForLocation(p);
This would have worked had the location string not been a URL of type "platform:"
For this particular case, notes in org.eclipse.core.runtime.Platform javadoc indicate that the "correct" solution is something like
URL fileUrl = FileLocator.toFileURL(new URL(locationString));
workspaceRoot.getFileForLocation(new Path(fileUrl.getPath()));
@[Paul Reiners] your solution apparently assumes that the workspace root is going to be in the "resources" folder
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Excel 2007 pivot tables - how to use calculated fields when connecting to a data cube? Can you use calculated fields in Excel 2007 pivot tables when the data source is an SSAS data cube?
I am connecting to a SQL Server 2005 data cube with Excel 2007 and viewing the data though a pivot table. I want to create calculated fields in my pivot table, but the "Formulas" button is grayed out.
How do I do create calculated fields (if it is possible when connecting to a data cube) ?
A: Looks like you can't. This is the answer I got from MSDN:
Not possible. All of the aggregations are handled by Analysis Services. Since XL is not in charge of the aggregations it has no ability to do custom calculations. HTH... Jim Thomlinson
A: You can do this but it comes at a cost. You have to disconnect your pivot table from the SSAS cube and then you can use Excel as you normally would. The best first step is to get the pivot table exactly how you want it to look. From the pivot table ribbon bar menu, click the options sub-menu. Click the OLAP tools button and then click 'convert to formulas.'
You'll get a 1-way warning message. (good time to click save) Once you've disconnected you'll lose all of the pretty formatting and drag and drop stuff but you can always get that back with manual formatting. But what you gain now is Excel features. Now add the formulas where you want. The key of course is to make sure that you have the elements necessary for the new calculation.
Nice part about it is you can save and reopen later. And you can click refresh to get the data updated from the SSAS server. HTH
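After the conversion, each cell holds a cube formula that a custom calculation can then reference like any other Excel value; for example (the connection and member names are purely illustrative):
=CUBEVALUE("MyCubeConnection",
    CUBEMEMBER("MyCubeConnection", "[Measures].[Sales Amount]"),
    CUBEMEMBER("MyCubeConnection", "[Date].[Calendar Year].&[2008]"))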
A: Additionally, you can use 'Office Web Components' instead of Excel. OWC supports client-side formulas against OLAP cubes. I can't believe that they removed that feature from native Excel.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to accept a REF cursor in Java without importing Oracle packages I am writing a Java program using JDBC for database connectivity. I am calling a stored procedure that returns an Oracle REF CURSOR. Is there any way I can handle that without importing Oracle packages?
A: I think I tried to do this a while ago and kind of gave up (I guess you could figure out what int value OracleTypes.REF_CURSOR is and then use that int value, but that's a hack). If you have the patience, you could define a record (or object) type and define the cursor as a cursor of that type, since that can be cast using table() to a value that is selectable like regular tables, i.e.
select * from table( sp_returning( ? ) )
I did a quick google on ref cursor and jdbc and it looks like it might be an oracle extension which would explain why there is no standard way to access the data.
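For what it's worth, a minimal sketch of the int-value hack mentioned above (the procedure name and connection details are hypothetical; -10 is the int value of Oracle's OracleTypes.CURSOR constant, so no Oracle import is needed):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
public class RefCursorWithoutOracleTypes {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@host:1521:sid", "user", "pass");
        CallableStatement cs = conn.prepareCall("{ call my_proc(?) }");
        cs.registerOutParameter(1, -10); // -10 == OracleTypes.CURSOR
        cs.execute();
        ResultSet rs = (ResultSet) cs.getObject(1);
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        cs.close();
        conn.close();
    }
}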
A: Doing
select * from table( sp_returning( ? ) )
is slower than returning a ref cursor.
I can use a ref cursor in combination with C#, so why can't you do it with Java? I'm sure there are plenty of examples.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I speed up data retrieval from .NET AD within ColdFusion? How can I optimize the following code, which currently takes over 2 minutes to retrieve and loop through 800+ records from a pool of over 100K records, returning 6 fields per record (adds approximately 20 seconds per additional field):
<cfset dllPath="C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\System.DirectoryServices.dll" />
<cfset LDAPPath="LDAP://" & arguments.searchPath />
<cfset theLookUp=CreateObject(".NET","System.DirectoryServices.DirectoryEntry", dllPath).init(LDAPPath) />
<cfset theSearch=CreateObject(".NET","System.DirectoryServices.DirectorySearcher", dllPath).init(theLookUp) />
<cfset theSearch.Set_Filter(arguments.theFilter) />
<cfset theObject = theSearch.FindAll() />
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfloop from="0" to="#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#" index="item">
<cftry>
<cfset theQuery[col][theQuery.recordCount]=ListAppend(theQuery[col][theQuery.recordCount],theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Item(item),"|") />
<cfcatch type="any">
</cfcatch>
</cftry>
</cfloop>
</cfloop>
</cfloop>
A: It's been a long time since I touched CF, but I can give some hints in pseudo-code. For one thing, this expression is extremely inefficent:
#theObject.Get_Item(row).Get_Properties().Get_Item(col).Get_Count()-1#
Take the first part for example, Get_Item(row) - your code causes CF to go retrieve the row and its properties for each iteration of the #columnList# loop; and to top it all, you're doing that TWICE per iteration of columnlist (once for loop and again for the inner cfset). If you think about it, it only needs to retrieve the row for each iteration of the outer loop (from #sfstart# to #cfend). So, in pseudo-code do this:
for each row between start and end
cfset props = #theobject.get_item(row).get_properties()#
for each col in #columnlist#
cfset currentcol = #props.getitem(col)#
cfset count = #currentcol.getcount() - 1#
foreach item from 0 to #count#
cfset #currentcol.getItem(item)# etc...
Make sense? Every time you enter a loop, cache objects that will be reused in that scope (or child scopes) in a variable. That means you are only grabbing the column object once per iteration of the column loop. All variables defined in outer scopes are available in the inner scopes, as you can see in what I've done above. I know its tempting to cut and paste from previous lines, but don't. It only hurts you in the end.
hope this helps,
Oisin
A: How large is the list of items for the inner loop?
Switching to an array might be faster if there is a significantly large number of items.
I have implemented this alongside x0n's suggestions...
<cfset dllPath="C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\System.DirectoryServices.dll" />
<cfset LDAPPath="LDAP://" & arguments.searchPath />
<cfset theLookUp=CreateObject(".NET","System.DirectoryServices.DirectoryEntry", dllPath).init(LDAPPath) />
<cfset theSearch=CreateObject(".NET","System.DirectoryServices.DirectorySearcher", dllPath).init(theLookUp) />
<cfset theSearch.Set_Filter(arguments.theFilter) />
<cfset theObject = theSearch.FindAll() />
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset Props = theObject.get_item(row).get_properties() />
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfset CurrentCol = Props.getItem(col) />
<cfset ItemArray = ArrayNew(1)/>
<cfloop from="0" to="#CurrentCol.getcount() - 1#" index="item">
<cftry>
<cfset ArrayAppend( ItemArray , CurrentCol.Get_Item(item) )/>
<cfcatch type="any">
</cfcatch>
</cftry>
</cfloop>
<cfset theQuery[col][theQuery.recordCount] = ArrayToList( ItemArray , '|' )/>
</cfloop>
</cfloop>
A: Additionally, using a cftry block in each loop is likely slowing this down quite a bit. Unless you are expecting individual rows to fail (and you need to continue from that point), I would suggest a single try/catch block for the entire process. Try/catch is an expensive operation.
A: I would think that you'd want to stop doing so many evaluations inside of your loops and instead use variables to hold counts, pointers to the col object and to hold your pipe-delim string until you're ready to commit to the query object. If I've done the refactoring correctly, you should notice an improvement if you use the code below:
<cfloop index="row" from="#startRow#" to="#endRow#">
<cfset QueryAddRow(theQuery) />
<cfloop list="#columnList#" index="col">
<cfset PipedVals = "">
<cfset theItem = theObject.Get_Item(row).Get_Properties().Get_Item(col)>
<cfset ColCount = theItem.Get_Count()-1>
<cfloop from="0" to="#ColCount#" index="item">
<cftry>
<cfset PipedVals = ListAppend(PipedVals,theItem.Get_Item(item),"|")>
<cfcatch type="any"></cfcatch>
</cftry>
</cfloop>
<cfset QuerySetCell(theQuery, col, PipedVals) />
</cfloop>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is the single best free Eclipse plugin for a Java developer Some Eclipse plugins are mandated by your environment. The appropriate source code management plugin, for example - and I'm not interested in those.
Some provide useful enhancements, but in a specific niche. I'm not interested in those.
Some are great, but cost money. I'm not interested in those.
Some were really useful on older versions of Eclipse, but are now part of the core build of the latest Eclipse version (3.4 as I write this). I'm not interested in those.
I want advice on which plugins every Java SE developer should be installing, one per answer please.
A: Checkstyle. It's very quick.
FindBugs is wonderful but quite slow
A: My answer to this is clearly eclim. It exports Eclipse functionality to Vim, enabling me to use several awesome features of Eclipse, like auto-completion, autobuild and error-markup in the source file (using locations in Vim), auto-formatting, automatic imports, JavaDoc search, Source code Search... blah, I could go on forever. The most important thing is: I don't have to use the suck that is the Eclipse Java Editor (to me, editor quality is always subjective, of course).
Check out the site if you're into Vim, but forced/tempted to use Eclipse for one reason or another.
A: Resource Bundle Plugin
A: Findbugs saved me doing something silly twice today.
http://findbugs.sourceforge.net/
Eclipse update site is: http://findbugs.cs.umd.edu/eclipse/
A: I'm particularly fond of the bytecode outliner plugin, although it won't suit all tastes since looking at Java bytecode isn't for everyone. Sometimes it's really useful to see the underlying bytecode for your Java class.
Update site: http://download.forge.objectweb.org/eclipse-update/
Description: http://asm.objectweb.org/eclipse/index.html
A: Google just recently released CodePro, great plugin.
A: The Eclipse TPTP can be incredibly useful for finding the slow spots in code and for anything else that would requiring debugging, profiling, or benchmarking. The only flaw is that it doesn't work on the mac :'(.
A: I do really like the Andrei Loskutov's plugins:
http://andrei.gmxhome.de/eclipse.html
A: JAutodoc is extremely helpful if you are required to provide javadoc in your source and need to add it to a large class or many classes at the same time. It uses the names of your variables to create the javadoc, so it is not perfect and is limited by how meaningful your parameter names are. Even if you have to go back and fix it up a bit, it saves you a lot of time.
http://jautodoc.sourceforge.net/update/
A: spring IDE
Update URL: http://springide.org/updatesite
A: If you use Hibernate then Hibernate Tools is a must. I really like the ability to write my HQL or JPQL and view the generated SQL real time!
If you're not using Hibernate I'm guessing your using a database in some form or another. Therefore, I would recommend the Data Tools Platform. In fact, you would be crazy to develop Java apps without using all the plugins provided by the Eclipse Ganymede Release. It's a great development platform without the headache of getting all the must have plugins synced up and working together.
A: I found sourceHelper plugin very useful when developing and debugging code.
The description of the plugin on the website says, "The “Source Helper” plugin is an Eclipse plugin that takes a very useful feature that exists in Intellij IDEA and puts it into Eclipse. In short, the feature shows the code of an out-of-visible-range starting bracket by floating a window that shows the code you cannot see. This helps immensely when trying to identify what closing bracket belongs to what part of the code."
A: Chronon the time travelling debugger is awesome. I hope to see this ported to other languages in the future.
http://www.chrononsystems.com/
A: Seems like you can't really answer this question without having a focus for your development in Eclipse. I guess everyone needs a build and dependency system, so maybe Maven tools will win?
*
*http://m2eclipse.codehaus.org/ <-- nice for managing your project's pom.xml
*http://code.google.com/p/q4e/ <-- nice for managing your Maven repositories from Eclipse
A: Eclipse Metrics Feature (update site). The blurb:
This Eclipse plugin calculates various metrics for your code during build cycles and warns you, via the Problems view, of ‘range violations’ for each metric. This allows you to stay continuously aware of the health of your code base. You may also export the metrics to HTML for public display or to CSV format for further analysis.
*
*Recalculation of metrics during every build
*Export of metrics to CSV or HTML
*Visual ‘dashboard’ with HTML export
*Supported metrics are:
*
*McCabe’s Cyclomatic Complexity
*Efferent Couplings
*Lack of Cohesion in Methods
*Lines Of Code in Method
*Number Of Fields
*Number Of Levels
*Number Of Parameters
*Number Of Statements
*Weighted Methods Per Class
(actually, I love FindBugs more, but this project is second.)
A: A couple of my favorites are Mylyn and CheckStyle
A: EditBox
http://editbox.sourceforge.net/
A: Answering my own question with my current favourite, Jadclipse, which works with jad to decompile class files from third-party libraries.
http://jadclipse.sourceforge.net/
A: HyperAssist.
https://bugs.eclipse.org/bugs/show_bug.cgi?id=159157
In my view, it's the single factor that puts Eclipse ahead of every other IDE in terms of actual productivity.
A: FileSync has turned out to be really convenient when working with web applications, because it allows me to smoothly get incremental deployment on resource-type files, such as javascripts, JSPs, CSS files, and so on. It's simple to configure and just powerful enough to get the job done.
A: JBoss Tools for quick and easy web application development.
A: I'd recommend SpringSource Tool Suite, which is for enterprise Java development with the Spring framework.
A: If you need to get more insight in your code coverage EclEmma is pretty straightforward and useful
http://www.eclemma.org
A: Subclipse
SVN for eclipse
Update URL: http://subclipse.tigris.org/update_1.4.x
A: MouseFeed Eclipse Plugin
I am using this one; it's very helpful for programmers who don't use keyboard shortcuts because they don't know about them.
MouseFeed helps to form a habit of using keyboard shortcuts. When the user clicks on a button or on a menu item, the plugin shows a popup reminding about the key shortcut.
A: I'm going to cheat and say the maven plugin. Then everything else can hang off that.
Plus, maven-eclipse-plugin takes care of the biggest single problem I have with eclipse: Setting your classpath.
A: I've just discovered Byecycle. This dependency viewer lets you see how pretty (or otherwise) your design is, and highlights any circular dependencies between classes, allowing you to take appropriate action.
A: There's some great stuff mentioned by others, but I'm going to put SQLExplorer out there, too. Maybe not as generally useful as Maven or FindBugs, but it's great for pulling any JDBC data source into the IDE so you can test it and explore the DB structure. It's also available as a standalone RCP app.
A: Visual Editor for quick GUI development.
A: java.decompiler.free.fr/?q=jdeclipse
Java Decompiler plugin for Eclipse.
I think it is the most useful plugin.
A: The Fast Code Eclipse plugin can also be of some help.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
}
|
Q: How do I efficiently search an array to fill in form fields? I am looking for an efficient way to pull the data I want out of an array called $submission_info so I can easily auto-fill my form fields. The array size is about 120.
I want to find the field name and extract the content. In this case, the field name is loanOfficer and the content is John Doe.
Output of Print_r($submission_info[1]):
Array (
[field_id] => 2399
[form_id] => 4
[field_name] => loanOfficer
[field_test_value] => ABCDEFGHIJKLMNOPQRSTUVWXYZ
[field_size] => medium
[field_type] => other
[data_type] => string
[field_title] => LoanOfficer
[col_name] => loanOfficer
[list_order] => 2
[admin_display] => yes
[is_sortable] => yes
[include_on_redirect] => yes
[option_orientation] => vertical
[file_upload_dir] =>
[file_upload_url] =>
[file_upload_max_size] => 1000000
[file_upload_types] =>
[content] => John Doe
)
A: You're probably best off going through each entry and creating a new associative array out of it.
foreach($submission_info as $elem) {
$newarray[$elem["field_name"]] = $elem["content"];
}
Then you can just find the form fields by getting the value from $newarray[<field you're filling in>]. Otherwise, you're going to have to search $submission_info each time for the correct field.
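Filling a form field then becomes a simple lookup, e.g. (using the field name from the question):
$loanOfficer = $newarray['loanOfficer']; // "John Doe"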
A: Not sure if this is the optimal solution:
foreach($submission_info as $info){
if($info['field_name'] == 'loanOfficer'){ //check the field name
$content = $info['content']; //store the desired value
break; //this will stop the loop after the desired item is found (continue would not)
}
}
Next time:
Questions are more helpful to you and others if you generalize them such that they cover some overarching topic that you and maybe others don't understand. Seems like you could use an array refresher course...
A: I'm assuming that php has an associative array (commonly called dictionary or hashtable). The most efficient routine would be to run over the array once and put the fields into a dictionary keyed on the field name.
Then instead of having to search through the original array when you want to find a specific field (an O(n) operation), you just use the dictionary to retrieve it by the name of the field in an O(1) (or constant) operation. Of course the first pass over the array to populate the dictionary would be O(n), but that's a one-time cost rather than paying that same penalty for every lookup.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: tagName is null or not an object -- error msg in IE7 using latest version of jQuery (1.2.6) Has anyone else seen this error message? A quick check with Google doesn't show me much.
A: You are probably trying to do something like this....
alert($("#myElement").tagName);
You should do this...
alert($("#myElement")[0].tagName);
A: Ensure that you're running your script AFTER the page has rendered. If your script is at the top of the page and runs right away, and your DIV is below that script the DIV doesn't exist yet.
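For example, a minimal sketch of deferring the lookup until the DOM is ready (the selector is illustrative):
$(document).ready(function() {
    // #myElement exists by the time this runs
    alert($("#myElement")[0].tagName);
});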
A: If you're only seeing it in IE, it could be a bug with the DivX web plugin:
http://drupal.org/node/1038058
Issue is confirmed on the DivX site, which says "Fixed in the next release" (April 2011).
http://labs.divx.com/node/16824
Essentially, the bug is that DivX replaces the el.appendChild() function with an intercept that tries to lowercase the tagname, but doesn't bother checking to see if it is a text node. This will break jQuery if you try to do something like $('body').append('hi!'). Additionally, it fails to return the element, which breaks SWFObject, for one.
The workaround is to make sure that you never call append() with a text node, or with multiple top-level elements. If you are appending a series of items:
append($('<div>hi</div><div>there</div>'))
make sure you wrap them in a top level div to avoid the implied text node:
append($('<div><div>hi</div><div>there</div></div>'))
Here's a full workaround for appendChild being broken, the same approach can be used for insertBefore and replaceChild which share the bug. The Divx JS is inserted some time after the page is loaded, so you may want to poll for this.
window.fixDivxPlayerBug = function () {
var b = document.getElementsByTagName('body')[0];
if ( window['ReplaceVideoElements'] && ! b.appendChildDivx ){
b.appendChildDivx = b.appendChild;
b.appendChild = function (el) {
if( el.tagName == null) {
var wrap = document.createElement('div');
wrap.appendChild(el);
b.appendChildDivx(wrap);
}
else {
b.appendChildDivx(el);
}
return el;
};
}
};
A: Sorry, this was a sort of half-assed question on my part. I'm going through my .js files and eliminating entire blocks of code now. I've isolated it to one particular .js file.
thanks all the same for reading through this question and replying.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Dockable Form How do you create a "dockable" form, similar to the windows in Visual Studio?
A: I've used Weifen Luo's "DockPanel Suite" to good effect. It's an open source library that mimics Visual Studio's docking system very well, including nested docking, floating windows, tabbed windows, etc. You can download his source and see his approach there, if you'd prefer your own, simpler solution.
Sourceforge project here: http://sourceforge.net/projects/dockpanelsuite
A: We are using Weifen Luo's "DockPanel Suite" for our project and quite happy with it.
A: One of the best ones I've seen and used is SandDock from Divelements, they have both a WinForms and a WPF version.
A: I've used CodeJock's DockingPane ActiveX control to create docking panes before.
A: You'll probably want to consume someone else's component for this purpose rather than trying to roll your own, and there's no native WinForms way to do this.
There is a free library on CodeProject for this purpose, but I haven't tried it.
http://www.codeproject.com/KB/toolbars/DockContainer.aspx
A: Any custom control (composite, not inherited) in winforms can be docked, because it's really just a panel underneath. Combine this with a tablelayout panel for the high-level layout of your form and you have all the pieces. But you still have to implement all the glue and supporting UI yourself.
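As a minimal sketch of that approach (control names are illustrative; this is System.Windows.Forms), a panel plus a splitter already gives basic dock-and-resize behaviour:
Panel dockPanel = new Panel();
dockPanel.Dock = DockStyle.Left;   // pin the "tool window" to the left edge
dockPanel.Width = 200;

Splitter splitter = new Splitter();
splitter.Dock = DockStyle.Left;    // lets the user drag-resize the docked panel

this.Controls.Add(splitter);
this.Controls.Add(dockPanel);      // added last so it docks at the edge first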
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What tool works for real-time tweaking of CSS in IE6, similar to what Firebug does for Firefox? All front-end developers know the pain of coding for Firefox, then viewing our then mangled pages in IE6.
IE6 is still widely used (it is, however disappearing slowly but surely... in a year and a half from the writing of this, it will be irrelevant as usage will be less than 1%)
We usually used IE conditional comments to create separate CSS files for IE to render correctly. It would be great if there was a tool like Firebug for IE6. (Firebug "lite" exists but it seems to be very beta, and the new IE8 has an excellent built-in developer tool that in many ways is better than Firebug.) What's the best tool for IE6?
A: As far as I know Debugbar is currently the most sophisticated debugging tool for IE. It's definitely better than IE developer, but it's still not quite as slick as firebug.
A: I used to use IE Developer Toolbar as a sort of Firebug alternative. While I don't think it allows you to enter normal CSS, you can adjust the CSS as a list of name/value pairs (adding, editing, removing, etc).
A: Internet Explorer Developer Toolbar seems to be the best option.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Super Robust as chrome c++ and portable - tips - help - comments We are producing portable code (win+macOS) and we are looking at how to make the code more robust, as it crashes every so often... (overflows or bad initializations usually) :-(
I was reading that Google Chrome uses a process for every tab, so if something goes wrong the program does not crash completely, only that tab. I think that is quite neat, so I might give it a go!
So I was wondering if someone has some tips, help, a reading list, comments, or anything else that can help me build more robust C++ code (portable is always better).
In the same topic i was also wondering if there is a portable library for processes (like boost)?
Well, many thanks.
A: The Chrome answer is more about failure mitigation and not about code quality. Doing what Chrome is doing is admitting defeat.
*
*Better QA that is more than just programmer testing their own work.
*Unit testing
*Regression testing
*Read up on best practices that other
companies use.
To be blunt, if your software is crashing often due to overflows and bad initializations, then you have a very basic programming quality problem that isn't going to be easily fixed. That sounds harsh and mean, but that isn't my intent. My point is that the problem of the bad code has to be your primary concern (which I'm sure it is). Things like Chrome or liberal use of exception handling to catch program flaws are only distracting you from the real problem.
A: I've developed on numerous multi-platform C++ apps (the largest being 1.5M lines of code and running on 7 platforms -- AIX, HP-UX PA-RISC, HP-UX Itanium, Solaris, Linux, Windows, OS X). You actually have two entirely different issues in your post.
*
*Instability. Your code is not stable. Fix it.
*
*Use unit tests to find logic problems before they kill you.
*Use debuggers to find out what's causing the crashes if it's not obvious.
*Use boost and similar libraries. In particular, the pointer types will help you avoid memory leaks.
*Cross-platform coding.
*
*Again, use libraries that are designed for this when possible. Particularly for any GUI bits.
*Use standards (e.g. ANSI vs gcc/MSVC, POSIX threads vs Unix-specific thread models, etc) as much as possible, even if it requires a bit more work. Minimizing your platform specific code means less overall work, and fewer APIs to learn.
*Isolate, isolate, isolate. Avoid in-line #ifdefs for different platforms as much as possible. Instead, stick platform specific code into its own header/source/class and use your build system and #includes to get the right code. This helps keep the code clean and readable.
*Use the C99 integer types if at all possible instead of "long", "int", "short", etc -- otherwise it will bite you when you move from a 32-bit platform to a 64-bit one and longs suddenly change from 4 bytes to 8 bytes. And if that's ever written to the network/disk/etc then you'll run into incompatibility between platforms.
Personally, I'd stabilize the code first (without adding any more features) and then deal with the cross-platform issues, but that's up to you. Note that Visual Studio has an excellent debugger (the code base mentioned above was ported to Windows just for that reason).
A: You don't mention what the target project is; having a process per-tab does not necessarily mean more "robust" code at all. You should aim to write solid code with tests regardless of portability - just read about writing good C++ code :)
As for the portability section, make sure you are testing on both platforms from day one and ensure that no new code is written until platform-specific problems are solved.
A: You really, really don't want to do what Chrome is doing, it requires a process manager which is probably WAY overkill for what you want.
You should investigate using smart pointers from Boost or another tool that will provide reference counting or garbage collection for C++.
Alternatively, if you are frequently crashing you might want to perhaps consider writing non-performance critical parts of your application in a scripting language that has C++ bindings.
A: Scott Meyers' Effective C++ and More Effective C++ are very good, and fun to read.
Steve McConnell's Code Complete is a favorite of many, including Jeff Atwood.
The Boost libraries are probably an excellent choice. One project where I work uses them. I've only used WIN32 threading myself.
A: I agree with Torlack.
Bad initialization or overflows are signs of poor quality code.
Google did it that way because sometimes there was no way to control the code that was executed in a page (because of faulty plugins, etc.). So if you're using low-quality plugins (it happens), perhaps the Google solution will be good for you.
But a program without plugins that crashes often is just badly written, or very very complex, or very old (and missing a lot of maintenance time). You must stop the development, and investigate each and every crash. On Windows, compile the modules with PDBs (program databases), and each time it crashes, attach a debugger to it.
You must add internal tests, too. Avoid the pattern:
void doSomethingBad(T * t)
{
if(t == NULL) return ;
// do the processing.
}
This is very bad design because the error is there and you just avoid it, this time. But the next function without this guard will crash. Better to crash sooner, nearer to the error.
Instead, on Windows (there must be a similar API on MacOS)
void doSomethingBad(T * t)
{
if(t == NULL) ::DebugBreak() ; // it will call the debugger
// do the processing.
}
(don't use this code directly... Put it in a define to avoid delivering it to a client...)
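A minimal sketch of such a define (the macro name is illustrative; _DEBUG and ::DebugBreak are MSVC/Windows specific, per the note above):
#ifdef _DEBUG
#define BREAK_IF_NULL(p) do { if ((p) == NULL) ::DebugBreak(); } while (0)
#else
#define BREAK_IF_NULL(p) ((void)0)
#endif

void doSomethingBad(T * t)
{
    BREAK_IF_NULL(t); // breaks into the debugger in debug builds, compiles away in release
    // do the processing.
}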
You can choose the error API that suits you (exceptions, DebugBreak, assert, etc.), but use it to stop the moment the code knows something's wrong.
Avoid the C API whenever possible. Use C++ idioms (RAII, etc.) and libraries.
Etc..
P.S.: If you use exceptions (which is a good choice), don't hide them inside a catch. You'll only make your problem worse because the error is there, but the program will try to continue and will probably crash sometime after, corrupting anything it touches in the meantime.
A: You can always add exception handling to your program to catch these kinds of faults and ignore them (though the details are platform specific) ... but that is very much a two edged sword. Instead consider having the program catch the exceptions and create dump files for analysis.
If your program has behaved in an unexpected way, what do you know about your internal state? Maybe the routine/thread that crashed has corrupted some key data structure? Maybe if you catch the error and try to continue the user will save whatever they are working on and commit the corruption to disk?
A: Beside writing more stable code, here's one idea that answers your question.
Whether you are using processes or threads, you can write a small / simple watchdog program. Then your other programs register with that watchdog. If any process or thread dies, it can be restarted by the watchdog. Of course you'll want to put in some test to make sure you don't keep restarting the same buggy thread, e.g. restart it 5 times, then after the 5th, shut down the whole program and log to file / syslog.
A: Build your app with debug symbols, then either add an exception handler or configure Dr Watson to generate crash dumps (run drwtsn32.exe /i to install it as the debugger, without the /i to pop the config dialog). When your app crashes, you can inspect where it went wrong in windbg or visual studio by seeing a callstack and variables.
google for symbol server for more info.
Obviously you can use exception handling to make it more robust and use smart pointers, but fixing the bugs is best.
A: Build it with the idea that the only way to quit is for the program to crash and that it can crash at any time. When you build it that way, crashing will never/almost never lose any data. I read an article about it a year or two ago. Sadly, I don't have a link to it.
Combine that with some sort of crash dump and have it email you it so you can fix the problem.
A: I would recommend that you compile up a linux version and run it under Valgrind.
Valgrind will track memory leaks, uninitialized memory reads and many other code problems. I highly recommend it.
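A typical invocation looks like this (the binary name is illustrative):
valgrind --leak-check=full ./myapp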
A: After over 15 years of Windows development I recently wrote my first cross-platform C++ app (Windows/Linux). Here's how:
*
*STL
*Boost. In particular the filesystem and thread libraries.
*A browser based UI. The app 'does' HTTP, with the UI consisting of XHTML/CSS/JavaScript (Ajax style). These resources are embedded in the server code and served to the browser when required.
*Copious unit testing. Not quite TDD, but close. This actually changed the way I develop.
I used NetBeans C++ for the Linux build and had a full Linux port in no time at all.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Distributed hierarchical clustering Are there any algorithms that can help with hierarchical clustering?
Google's map-reduce has only an example of k-clustering. In case of hierarchical clustering, I'm not sure how it's possible to divide the work between nodes.
Another resource that I found is: http://issues.apache.org/jira/browse/MAHOUT-19
But it's not apparent, which algorithms are used.
A: Clark Olson reviews several distributed algorithms for hierarchical clustering:
C. F. Olson. "Parallel Algorithms for
Hierarchical Clustering." Parallel
Computing, 21:1313-1325, 1995, doi:10.1016/0167-8191(95)00017-I.
Parunak et al. describe an algorithm inspired by how ants sort their nests:
H. Van Dyke Parunak, Richard Rohwer,
Theodore C. Belding,and Sven
Brueckner: "Dynamic Decentralized
Any-Time Hierarchical Clustering." In
Proc. 4th International Workshop on Engineering Self-Organising Systems
(ESOA), 2006, doi:10.1007/978-3-540-69868-5
A: Check out this very readable if a bit dated review by Olson (1995). Most papers since then require a fee to access. :-)
If you use R, I recommend trying pvclust which achieves parallelism using snow, another R module.
A: First, you have to decide if you're going to build your hierarchy bottom-up or top-down.
Bottom-up is called Hierarchical agglomerative clustering. Here's a simple, well-documented algorithm: http://nlp.stanford.edu/IR-book/html/htmledition/hierarchical-agglomerative-clustering-1.html.
Distributing a bottom-up algorithm is tricky because each distributed process needs the entire dataset to make choices about appropriate clusters. It also needs a list of clusters at its current level so it doesn't add a data point to more than one cluster at the same level.
Top-down hierarchy construction is called Divisive clustering. K-means is one option to decide how to split your hierarchy's nodes. This paper looks at K-means and Principal Direction Divisive Partitioning (PDDP) for node splitting: http://scgroup.hpclab.ceid.upatras.gr/faculty/stratis/Papers/tm07book.pdf. In the end, you just need to split each parent node into relatively well-balanced child nodes.
A top-down approach is easier to distribute. After your first node split, each node created can be shipped to a distributed process to be split again and so on... Each distributed process needs only to be aware of the subset of the dataset it is splitting. Only the parent process is aware of the full dataset.
In addition, each split could be performed in parallel. Two examples for k-means:
*
*http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.1882&rep=rep1&type=pdf
*http://www.ece.northwestern.edu/~wkliao/Kmeans/index.html.
A: You can also see Finding and evaluating community structure in networks by Newman and Girvan, where they propose an approach for evaluating communities in networks (and a set of algorithms based on this approach) and a measure of the quality of a network's division into communities (graph modularity).
A: You could look at some of the work being done with Self-Organizing maps (Kohonen's neural network method)... the guys at Vienna University of Technology have done some work on distributed calculation of their growing hierarchical map algorithm.
This is a little on the edge of your clustering question, so it may not help, but I can't think of anything closer ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: ASP.NET 1.1 Page_ClientValidate Debugging I have an ASP.NET 1.1 application, and on my local machine the submit button on my page works fine, but when I deploy it to our development application server, I click on Submit and nothing happens.. I'm assuming that the Page_Validate() function is failing and disabling the POSTBACK, but how do I debug this and determine what is failing? It sounds like some config problem since it works great on my local machine but not on the remote server...
A: Here's what happened... in ASP.NET 1.1, there was an error in the WebUIValidation.js file (supplied by Microsoft and created when you run aspnet_regiis.exe), in the function ValidatorCommonOnSubmit. It seems the method was missing a return statement!! If you modify this file and insert "return event.returnValue" at the end, your validations are OK. Took me a while to find this one, but once I did I googled it and it was a well-known bug.
A: I remember back in the day with 1.1 Visual Studio used to destroy my event handler hookups occasionally.
If you are using Visual Studio 2003, make certain that the "generated" code still contains the event handler wireup for your control.
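In a VS2003 web form, that wireup normally lives in InitializeComponent in the code-behind and looks something like this (control and handler names are illustrative):
private void InitializeComponent()
{
    this.btnSubmit.Click += new System.EventHandler(this.btnSubmit_Click);
    this.Load += new System.EventHandler(this.Page_Load);
}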
A: See if the aspnet_client directory of scripts is correctly installed on the server. You should have a js like this one. Otherwise execute aspnet_regiis.exe -c (see the docs)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: XML node name clean up code I am trying to create an XML file based on data fields from a table, and I want to have the nodes named based on the value in a field from the table. The problem is that sometimes values entered in that column contain spaces and other characters not allowed in Node names.
Does anyone have any code that will clean up a passed-in string and replace invalid characters with replacement text, so that it can be reversed on the other end to get the original value back?
I am using .net (vb.net but I can read/convert c#)
A: It might be easier to store the original value in an attribute if it is illegal as a node name. Then you wouldn't worry about having some sort of complex to/from translation.
A: As a matter of fact, I would go so far as to say that unless you have complete control over the data, no translation process will ever work. So I second storing the original data either as an attribute or as a child node.
A: You didn't say in your original post what language, so here's a regex pattern that should get you started. This is QUICK so you'll need to test it. (The character class already matches spaces, tabs and other whitespace.)
[^A-Za-z0-9]
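Since the question mentions .NET, it's worth noting that the framework already ships a reversible scheme for exactly this: System.Xml.XmlConvert.EncodeName escapes each invalid character as an _xHHHH_ sequence, and DecodeName reverses it. A minimal sketch:
using System.Xml;

string nodeName = XmlConvert.EncodeName("Loan Officer");   // "Loan_x0020_Officer"
string original = XmlConvert.DecodeName(nodeName);         // back to "Loan Officer"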
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I programmatically wire up ToolStripButton events in C#? I'm programmatically adding ToolStripButton items to a context menu.
That part is easy.
this.tsmiDelete.DropDownItems.Add("The text on the item.");
However, I also need to wire up the events so that when the user clicks the item something actually happens!
How do I do this? The method that handles the click also needs to receive some sort of id or object that relates to the particular ToolStripButton that the user clicked.
A: Couldn't you just subscribe to the Click event? Something like this:
ToolStripButton btn = new ToolStripButton("The text on the item.");
this.tsmiDelete.DropDownItems.Add(btn);
btn.Click += new EventHandler(OnBtnClicked);
And OnBtnClicked would be declared like this:
private void OnBtnClicked(object sender, EventArgs e)
{
ToolStripButton btn = sender as ToolStripButton;
// handle the button click
}
The sender should be the ToolStripButton, so you can cast it and do whatever you need to do with it.
A: Thanks for your help with that, Andy. My only problem now is that AutoSize is not working on the ToolStripButtons that I'm adding! They're all too narrow.
It's rather odd because it was working earlier.
Update: There's definitely something wrong with AutoSize for programmatically created ToolStripButtons. However, I found a solution:
*
*Create the ToolStripButton.
*Create a label control and set the font properties to match your button.
*Set the text of the label to match your button.
*Set the label to AutoSize.
*Read the width of the label and use that to set the width of the ToolStripButton.
It's hacky, but it works.
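In code, those steps look roughly like this (a sketch; btn stands for the ToolStripButton created earlier, and depending on the framework version you may want the label's PreferredWidth rather than Width):
using (Label measure = new Label())
{
    measure.AutoSize = true;
    measure.Font = btn.Font;            // match the button's font
    measure.Text = btn.Text;            // match the button's text
    btn.AutoSize = false;
    btn.Width = measure.PreferredWidth; // copy the measured width back
}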
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I create a self-signed certificate for code signing on Windows? How do I create a self-signed certificate for code signing using tools from the Windows SDK?
A: Updated Answer
If you are using the following Windows versions or later: Windows Server 2012, Windows Server 2012 R2, or Windows 8.1 then MakeCert is now deprecated, and Microsoft recommends using the PowerShell Cmdlet New-SelfSignedCertificate.
If you're using an older version such as Windows 7, you'll need to stick with MakeCert or another solution. Some people suggest the Public Key Infrastructure Powershell (PSPKI) Module.
Original Answer
While you can create a self-signed code-signing certificate (SPC - Software Publisher Certificate) in one go, I prefer to do the following:
Creating a self-signed certificate authority (CA)
makecert -r -pe -n "CN=My CA" -ss CA -sr CurrentUser ^
-a sha256 -cy authority -sky signature -sv MyCA.pvk MyCA.cer
(^ = allow batch command-line to wrap line)
This creates a self-signed (-r) certificate, with an exportable private key (-pe). It's named "My CA", and should be put in the CA store for the current user. We're using the SHA-256 algorithm. The key is meant for signing (-sky).
The private key should be stored in the MyCA.pvk file, and the certificate in the MyCA.cer file.
Importing the CA certificate
Because there's no point in having a CA certificate if you don't trust it, you'll need to import it into the Windows certificate store. You can use the Certificates MMC snapin, but from the command line:
certutil -user -addstore Root MyCA.cer
Creating a code-signing certificate (SPC)
makecert -pe -n "CN=My SPC" -a sha256 -cy end ^
-sky signature ^
-ic MyCA.cer -iv MyCA.pvk ^
-sv MySPC.pvk MySPC.cer
It is pretty much the same as above, but we're providing an issuer key and certificate (the -ic and -iv switches).
We'll also want to convert the certificate and key into a PFX file:
pvk2pfx -pvk MySPC.pvk -spc MySPC.cer -pfx MySPC.pfx
If you want to use a password, use the following instead:
pvk2pfx -pvk MySPC.pvk -spc MySPC.cer -pfx MySPC.pfx -po fess
If you want to protect the PFX file, add the -po switch, otherwise PVK2PFX creates a PFX file with no passphrase.
Using the certificate for signing code
signtool sign /v /f MySPC.pfx ^
/t http://timestamp.url MyExecutable.exe
(See why timestamps may matter)
If you import the PFX file into the certificate store (you can use PVKIMPRT or the MMC snapin), you can sign code as follows:
signtool sign /v /n "Me" /s SPC ^
/t http://timestamp.url MyExecutable.exe
Some possible timestamp URLs for signtool /t are:
*
*http://timestamp.verisign.com/scripts/timstamp.dll
*http://timestamp.globalsign.com/scripts/timstamp.dll
*http://timestamp.comodoca.com/authenticode
*http://timestamp.digicert.com
Full Microsoft documentation
*
*signtool
*makecert
*pvk2pfx
Downloads
For those who are not .NET developers, you will need a copy of the Windows SDK and .NET framework. The SDK installs makecert in C:\Program Files\Microsoft SDKs\Windows\v7.1. Your mileage may vary.
MakeCert is available from the Visual Studio Command Prompt. Visual Studio 2015 does have it, and it can be launched from the Start Menu in Windows 7 under "Developer Command Prompt for VS 2015" or "VS2015 x64 Native Tools Command Prompt" (probably all of them in the same folder).
A: It's fairly easy using the New-SelfSignedCertificate command in Powershell.
Open powershell and run these 3 commands.
*
*Create certificate:
$cert = New-SelfSignedCertificate -DnsName www.yourwebsite.com -Type CodeSigning -CertStoreLocation Cert:\CurrentUser\My
*Set the password for it:
$CertPassword = ConvertTo-SecureString -String "my_password" -Force -AsPlainText
*Export it:
Export-PfxCertificate -Cert "cert:\CurrentUser\My\$($cert.Thumbprint)" -FilePath "d:\selfsigncert.pfx" -Password $CertPassword
Your certificate selfsigncert.pfx will be located @ D:/
Optional step: You may also need to add the certificate password to the system environment variables. Do so by entering the following in cmd:
setx CSC_KEY_PASSWORD "my_password"
A: Roger's answer was very helpful.
I had a little trouble using it, though, and kept getting the red "Windows can't verify the publisher of this driver software" error dialog. The key was to install the test root certificate with
certutil -addstore Root Demo_CA.cer
which Roger's answer didn't quite cover.
Here is a batch file that worked for me (with my .inf file, not included).
It shows how to do it all from start to finish, with no GUI tools at all
(except for a few password prompts).
REM Demo of signing a printer driver with a self-signed test certificate.
REM Run as administrator (else devcon won't be able to try installing the driver)
REM Use a single 'x' as the password for all certificates for simplicity.
PATH %PATH%;"c:\Program Files\Microsoft SDKs\Windows\v7.1\Bin";"c:\Program Files\Microsoft SDKs\Windows\v7.0\Bin";c:\WinDDK\7600.16385.1\bin\selfsign;c:\WinDDK\7600.16385.1\Tools\devcon\amd64
makecert -r -pe -n "CN=Demo_CA" -ss CA -sr CurrentUser ^
-a sha256 -cy authority -sky signature ^
-sv Demo_CA.pvk Demo_CA.cer
makecert -pe -n "CN=Demo_SPC" -a sha256 -cy end ^
-sky signature ^
-ic Demo_CA.cer -iv Demo_CA.pvk ^
-sv Demo_SPC.pvk Demo_SPC.cer
pvk2pfx -pvk Demo_SPC.pvk -spc Demo_SPC.cer ^
-pfx Demo_SPC.pfx ^
-po x
inf2cat /drv:driver /os:XP_X86,Vista_X64,Vista_X86,7_X64,7_X86 /v
signtool sign /d "description" /du "www.yoyodyne.com" ^
/f Demo_SPC.pfx ^
/p x ^
/v driver\demoprinter.cat
certutil -addstore Root Demo_CA.cer
rem Needs administrator. If this command works, the driver is properly signed.
devcon install driver\demoprinter.inf LPTENUM\Yoyodyne_IndustriesDemoPrinter_F84F
rem Now uninstall the test driver and certificate.
devcon remove driver\demoprinter.inf LPTENUM\Yoyodyne_IndustriesDemoPrinter_F84F
certutil -delstore Root Demo_CA
A: As of PowerShell 4.0 (Windows 8.1/Server 2012 R2) it is possible to make a certificate in Windows without makecert.exe.
The commands you need are New-SelfSignedCertificate and Export-PfxCertificate.
Instructions are in Creating Self Signed Certificates with PowerShell.
A: As stated in the answer, in order to use a non deprecated way to sign your own script, one should use New-SelfSignedCertificate.
*
*Generate the key:
New-SelfSignedCertificate -DnsName email@yourdomain.com -Type CodeSigning -CertStoreLocation cert:\CurrentUser\My
*Export the certificate without the private key:
Export-Certificate -Cert (Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0] -FilePath code_signing.crt
The [0] will make this work for cases when you have more than one certificate... Obviously make the index match the certificate you want to use... or use a way to filter (by thumbprint or issuer).
*Import it as Trusted Publisher
Import-Certificate -FilePath .\code_signing.crt -Cert Cert:\CurrentUser\TrustedPublisher
*Import it as a Root certificate authority.
Import-Certificate -FilePath .\code_signing.crt -Cert Cert:\CurrentUser\Root
*Sign the script (assuming here it's named script.ps1, fix the path accordingly).
Set-AuthenticodeSignature .\script.ps1 -Certificate (Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)
Obviously once you have setup the key, you can simply sign any other scripts with it.
You can get more detailed information and some troubleshooting help in this article.
A: You can generate one in Visual Studio 2019, in the project properties. In the Driver Signing section, the Test Certificate field has a drop-down. Generating a test certificate is one of the options. The certificate will be in a file with the 'cer' extension typically in the same output directory as your executable or driver.
A: This post will only answer the "how to sign an EXE file if you have the certificate" part:
To sign the exe file, I used MS "signtool.exe". For this you will need to download the bloated MS Windows SDK, which is a whopping 1 GB. FORTUNATELY, you don't have to install it. Just open the ISO and extract "Windows SDK Signing Tools-x86_en-us.msi". It is a mere 400 KB.
Then I built this tiny script file:
prompt $
echo off
cls
copy "my.exe" "my.bak.exe"
"c:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64\signtool.exe" sign /fd SHA256 /f MyCertificate.pfx /p MyPassword My.exe
pause
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "284"
}
|
Q: What is the best choice for .NET inter-process communication? Should I use Named Pipes, or .NET Remoting to communicate with a running process on my machine?
A: WCF is the best choice. It supports a number of different transport mechanisms (including Named Pipes) and can be completely configuration driven. I would highly recommend that you take a look at WCF.
Here is a blog that does a WCF vs Remoting performance comparison.
A quote from the blog:
The WCF and .NET Remoting are really comparable in performance. The differences are so small (measuring client latency) that it does not matter which one is a bit faster. WCF though has much better server throughput than .NET Remoting. If I would start completely new project I would chose the WCF. Anyway the WCF does much more than Remoting and for all those features I love it.
MSDN Section for WCF
A: If you mean inter-process communication, I have used .NET Remoting without any problems so far. If the two processes are on the same machine, the communication is quite fast.
Named Pipes are definitely more efficient, but they require the design of at least a basic application protocol, which might not be feasible. Remoting allows you to invoke remote methods with ease.
A: If you are using the .NET Framework 3.0 or above, I would use WCF. Using WCF, you can use different bindings depending on the trade-off between performance/interop/etc. that you need.
If performance isn't critical and you need interop with other Web Service technologies, you will want to use the WS-HTTP binding. For your case, you can use WCF with either a net-tcp binding, or a named-pipe binding. Either should work.
My personal take is that the WCF approach is more clean as you can do Contract-Driven services and focus on messages, not objects (I'm making a generalization here based on the default programming models of WCF/.NET Remoting). I don't like sending objects across the wire because a lot of semantic information gets lost or is not clear. When all you are doing is sending a message like you are with WCF, it becomes easier to separate your concerns between communication and the classes/infrastructure that a single node is composed of.
A: Remoting in .NET Framework 2.0 provides the IPC channel for inter-process communication within the same machine.
A: WCF also provides flexibility. By just changing some config (binding) you can have the same service on some other machine instead of IPC on the same machine. Therefore your code remains flexible.
A: If it's on a single machine, Named Pipes gives you better performance and can be implemented with the remoting infrastructure as well as WCF. Or you can just directly use System.IO.Pipes.
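A minimal sketch of the System.IO.Pipes route (note it needs .NET 3.5, so it won't fit the .NET 2.0 constraint in the question's edit; the pipe name is illustrative):
using System.IO;
using System.IO.Pipes;

// Server side: wait for one client and read a line
using (NamedPipeServerStream server = new NamedPipeServerStream("MyAppPipe"))
{
    server.WaitForConnection();
    using (StreamReader reader = new StreamReader(server))
    {
        string message = reader.ReadLine();
    }
}

// Client side, in the other process:
using (NamedPipeClientStream client = new NamedPipeClientStream(".", "MyAppPipe"))
{
    client.Connect();
    using (StreamWriter writer = new StreamWriter(client))
    {
        writer.WriteLine("hello from the tray app");
        writer.Flush();
    }
}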
A: .Net remoting isn't a protocol in and of itself. It lets you pick which protocol to use: SOAP, named pipes, etc.
A: .net remoting is built into .net to do inter-process communication. If you use that, they will continue to support and possibly enhance it in future versions. Named pipes don't give you the promise of enhancements in future versions of .net
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
}
|
Q: Writing data over RxTx using usbserial? I'm using the RxTx library over usbserial on a Linux distro. The RxTx lib seems to behave quite differently (in a bad way) than how it works over serial.
One of my biggest problems is that the RxTx SerialPortEvent.OUTPUT_BUFFER_EMPTY does not work on linux over usb serial.
How do I know when I should write to the stream? Any indicators I might have missed?
So far my experience with writing and reading concurrently has not been great. Does anyone know if I should lock the DATA_AVAILABLE handler from being invoked while I'm writing on the stream? Or does RxTx accept concurrent reads/writes?
A: (perhaps slightly off-topic, but here goes)
I'm not familiar with that particular library, but I can assure you from dire experience (I work in the security systems (as in: hardware security devices) business, where RS-232 is heavily used) that not all USB-serial converters are born equal. Many such devices do not properly emulate all RS-232 lines, and many don't even handle any comms without flow control. Before blaming the library, try to confirm that the hardware actually does what it's supposed to do.
Without wanting to endorse a particular product or brand, the best (as in: least buggy) USB-serial converter I have come across in years is the USA-19HS.
A: Using RxTx over usb-to-serial you can't set notifyOnOutput to true otherwise it locks up completely.
I've learned this the hard way. This problem is documented on a few web sites over the internet.
I'm running it on Linux and I believe that this is a Linux only issue, although I can't confirm that.
As for the link you've given me... I've seen the SimpleReader and SimpleWriter examples, but these don't represent a real-world application. They are not multi-threaded, they assume a read has the full data it needs instead of buffering reads, etc.
Thanks,
Jeach!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to communicate with a windows service from an application that interacts with the desktop? With .Net what is the best way to interact with a service (i.e. how do most tray-apps communicate with their servers). It would be preferred if this method would be cross-platform as well (working in Mono, so I guess remoting is out?)
Edit:
Forgot to mention, we still have to support Windows 2000 machines in the field, so WCF and anything above .Net 2.0 won't fly.
A: If this is a tray app, and not a true service, be wary of how you set up your communications if using pipes or TCP/IP. If multiple users are logged into a machine (Citrix, Remote Desktop), and each user launches a tray app "service", then you can run into a situation where you have multiple processes trying to use the same well known port or pipe. Of course this isn't a problem if you don't plan on supporting multiple pipes or if you have a true service as opposed to a tray app that runs in each user shell.
A: Have your service listen to 127.0.0.1 on a predefined port with a plain old TCP stream socket. Connect to that port from your desktop application.
It's dead simple and it's completely cross platform.
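A minimal sketch in C# (the port number is arbitrary; TcpListener/TcpClient have been available since .NET 1.0 and work under Mono):
using System.IO;
using System.Net;
using System.Net.Sockets;

// Service side: listen on the loopback interface only
TcpListener listener = new TcpListener(IPAddress.Loopback, 9050);
listener.Start();
TcpClient connection = listener.AcceptTcpClient();          // blocks until the tray app connects
StreamReader reader = new StreamReader(connection.GetStream());
string command = reader.ReadLine();

// Tray-app side, in the other process:
TcpClient client = new TcpClient("127.0.0.1", 9050);
StreamWriter writer = new StreamWriter(client.GetStream());
writer.WriteLine("status");
writer.Flush();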
A: Did any one of you actually try remoting with Mono? It works just fine. You might bump into some corner cases, but this is highly unlikely. Just test your application for cross-platform (MS.Net <-> Mono) remoting from time to time to catch any possible glitches. And start with a recent Mono, 2.4.2 is current.
A: Remoting is an option, but it's not cross-platform. Some other ways are to use named pipes, IPC, or kernel events.
A: Funnily enough I was going to suggest Remoting! The Mono 1.0 Release Notes (from archive.org because the original location is missing) mention System.Runtime.Remoting.dll as a supported library and doesn't say anything about known issues.
If remoting is out then you probably have to implement your own TCP message framing protocol. Windows doesn't have an equivalent of UNIX-domain sockets for communication on the same machine.
A: Most services that have a GUI component are run as a named user, and are allowed access to the desktop. This lets you access it via COM or .NET but only locally (unless you want to get complicated)
Personally, I open an ordinary old socket on the service - it's cross-platform, allows multiple clients, allows any app to access it, doesn't rely on Windows security to be opened up for it, and allows your GUI to be written in any language you like (as everything supports sockets).
For a tray app, you'd want a simple protocol to communicate - you might as well use a REST-style system to send commands to it, and stream XML (yuk) or a custom data format back.
A: Be aware that if you are planning to eventually deploy on Windows Vista or Windows Server 2008, many ways that this can be done today will not work. This is because of the introduction of a new security feature called "Session 0 Isolation".
Most windows services have been moved to run in Session 0 now in order to properly isolate them from the rest of the system. An extension of this is that the first user to login to the system no longer is placed in Session #0, they are placed in Session 1. And hence, the isolation will break code that does certain types of communication between services and desktop applications.
The best way to write code today that will work on Vista and Server 2008 going forward when doing communication between services and applications is to use a proper cross-process API like RPC, Named Pipes, etc. Do not use SendMessage/PostMessage as that will fail under Session 0 Isolation.
http://www.microsoft.com/whdc/system/vista/services.mspx
Now, given your requirements, you are going to be in a bit of a pickle. For the cross-platform concerns, I'm not sure if Remoting would be supported. You may have to drop down and go all the way back to sockets: http://msdn.microsoft.com/en-us/library/system.net.sockets.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: How do you maintain large t-sql procedures I'm about to inherit a large and complex set of stored procedures that do monthly processing on very large sets of data.
We are in the process of debugging them so they match the original process, which was written in VB6. The reason they decided to rewrite them in t-sql is because the VB process takes days and this new process takes hours.
All this is fine, but how can I make these now massive chunks of t-sql code (1.5k+ lines) even remotely readable / maintainable?
Any experience making t-sql less of a headache is very welcome.
A: First, create a directory full of .sql files and maintain them there. Add this set of .sql files to a revision control system. SVN works well. Have a tool that loads these into your database, overwriting any existing ones.
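A minimal loader sketch as a batch file (server and database names are placeholders; on SQL Server 2000 use osql with the same switches):
REM Load every .sql file from the procs directory; -b aborts on the first error
for %%f in (procs\*.sql) do sqlcmd -S MyServer -d MyDatabase -b -i "%%f"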
Have a testing database, and baseline reports showing what the output of the monthly processing should look like. Your tests should also be in the form of .sql files under version control.
You can now refactor your procs as much as you like, and run your tests afterward to confirm correct function.
A: For formatting/pretty-fying SQL, I've had success with http://www.sqlinform.com/ - free online version you can try out, and a desktop version available too.
SQLinForm is an automatic SQL code formatter for all major databases (ORACLE, SQL Server, DB2 / UDB, Sybase, Informix, PostgreSQL, MySQL etc) with many formatting options.
A: Definitely start by reformatting the code, especially indentation.
Then modularise the SQL. Pull out chunks into smaller, descriptively named procedures and functions in their own stand-alone files. This alone, I find, works very well for improving my understanding of large SQL files.
A: ApexSQLScript is a great tool for scripting out an entire database - you can then check that into source control and manage changes.
I've also found that documenting the sprocs consistently lets you pull out information about them using the data about the source code in sys.sql_modules - you can use tags or whatever to help document subsystems.
Also, use Schemas (or even multiple databases) - this will really help divide up your database into logical units and point out architectural issues.
As far as large code, I've recently found the SQL2005 CTE feature to be very useful in managing code with lots of nested queries (not even recursive). Instead of managing a bunch of nesting and indentation, CTEs can be declared and built up and then used in the final statement. This also helps in refactoring as it seems a lot easier to remove redundant nested queries and columns.
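For example, a small sketch of flattening nested queries with CTEs (table and column names are illustrative):
WITH MonthlyTotals AS (
    SELECT AccountId, SUM(Amount) AS Total
    FROM dbo.Transactions
    GROUP BY AccountId
),
LargeAccounts AS (
    SELECT AccountId, Total
    FROM MonthlyTotals
    WHERE Total > 10000
)
SELECT AccountId, Total
FROM LargeAccounts
ORDER BY Total DESC;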
Stored Procs and UDFs are vital for managing a large code base and eliminating dark corners. I have not found views to be terribly helpful because they are not parameterizable (UDFs can be used in these cases if the result sets are small).
A: Try to modularise the SQL as much as possible and have a set of tests which will enable you to maintain, refactor and add features when needed. I once had the pleasure of inheriting a stored proc that totalled 5000 lines and I still have nightmares about it. Once the project was over I printed out the stored proc for a laugh, destroying X trees in the process. During one of our company's weekly stand-up sessions I laid it out end to end and it stretched the entire length of the building. I used this as an example of how not to write and maintain stored procedures.
A: One thing that you can do is have an automated script to store all changes to source control so that you can review changes to the procedures (using a diff on the previous and current versions)
A: It's definitely not free, but for keeping your T-SQL formatted in a consistent way, Redgate Software's SQL Prompt is very handy. As long as your proc's syntax is correct, a couple of keystrokes (Ctrl+K,Y) will reformat it all instantly. The options give you a lot of control over how your SQL is formatted.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/84880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|