Q: What’s in your PowerShell `profile.ps1` file? What essential things (functions, aliases, start up scripts) do you have in your profile?
A: # ----------------------------------------------------------
# msdn search for win32 APIs.
# ----------------------------------------------------------
function Search-MSDNWin32
{
$url = 'http://search.msdn.microsoft.com/?query=';
$url += $args[0];
for ($i = 1; $i -lt $args.count; $i++) {
$url += '+';
$url += $args[$i];
}
$url += '&locale=en-us&refinement=86&ac=3';
Open-IE($url);
}
# ----------------------------------------------------------
# Open Internet Explorer given the url.
# ----------------------------------------------------------
function Open-IE ($url)
{
$ie = new-object -comobject internetexplorer.application;
$ie.Navigate($url);
$ie.Visible = $true;
}
A: I rock a few functions, and since I'm a module author I typically load a console and desperately need to know what's where.
write-host "Your modules are..." -ForegroundColor Red
Get-module -li
Die hard nerding:
function prompt
{
$host.UI.RawUI.WindowTitle = "ShellPower"
# Need to still show the working directory.
#Write-Host "You landed in $PWD"
# Nerd up, yo.
$Str = "Root@The Matrix"
"$str> "
}
The mandatory "anything I can PowerShell, I will" functions go here...
# Explorer command
function Explore
{
param
(
[Parameter(
Position = 0,
ValueFromPipeline = $true,
Mandatory = $true,
HelpMessage = "This is the path to explore..."
)]
[ValidateNotNullOrEmpty()]
[string]
# First parameter is the path you're going to explore.
$Target
)
$exploration = New-Object -ComObject shell.application
$exploration.Explore($Target)
}
I am STILL an administrator so I do need...
Function RDP
{
param
(
[Parameter(
Position = 0,
ValueFromPipeline = $true,
Mandatory = $true,
HelpMessage = "Server Friendly name"
)]
[ValidateNotNullOrEmpty()]
[string]
$server
)
cmdkey /generic:TERMSRV/$server /user:$UserName /pass:($Password.GetNetworkCredential().Password)
mstsc /v:$Server /f /admin
Wait-Event -Timeout 5
cmdkey /Delete:TERMSRV/$server
}
Sometimes I want to start explorer as someone other than the logged in user...
# Restarts explorer as the user in $UserName
function New-Explorer
{
# CLI prompt for password
taskkill /f /IM Explorer.exe
runas /noprofile /netonly /user:$UserName explorer
}
This is just because it's funny.
Function Lock-RemoteWorkstation
{
param(
$Computername,
$Credential
)
if(!(get-module taskscheduler))
{
Import-Module TaskScheduler
}
New-task -ComputerName $Computername -credential:$Credential |
Add-TaskTrigger -In (New-TimeSpan -Seconds 30) |
Add-TaskAction -Script `
{
$signature = @"
[DllImport("user32.dll", SetLastError = true)]
public static extern bool LockWorkStation();
"@
$LockWorkStation = Add-Type -memberDefinition $signature -name "Win32LockWorkStation" -namespace Win32Functions -passthru
$LockWorkStation::LockWorkStation() | Out-Null
} | Register-ScheduledTask TestTask -ComputerName $Computername -credential:$Credential
}
I also have one for me, since Win + L is too far away...
Function llm # Lock Local machine
{
$signature = @"
[DllImport("user32.dll", SetLastError = true)]
public static extern bool LockWorkStation();
"@
$LockWorkStation = Add-Type -memberDefinition $signature -name "Win32LockWorkStation" -namespace Win32Functions -passthru
$LockWorkStation::LockWorkStation() | Out-Null
}
A few filters? I think so...
filter FileSizeBelow($size){if($_.length -le $size){ $_ }}
filter FileSizeAbove($size){if($_.Length -ge $size){$_}}
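For example:
gci | FileSizeAbove 1mb   # pipeline objects whose Length is at least 1 MB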
I also have a few I can't post yet, because they're not done but they're basically a way to persist credentials between sessions without writing them out as an encrypted file.
A: Here's my not so subtle profile
#==============================================================================
# Jared Parsons PowerShell Profile (jaredp@rantpack.org)
#==============================================================================
#==============================================================================
# Common Variables Start
#==============================================================================
$global:Jsh = new-object psobject
$Jsh | add-member NoteProperty "ScriptPath" $(split-path -parent $MyInvocation.MyCommand.Definition)
$Jsh | add-member NoteProperty "ConfigPath" $(split-path -parent $Jsh.ScriptPath)
$Jsh | add-member NoteProperty "UtilsRawPath" $(join-path $Jsh.ConfigPath "Utils")
$Jsh | add-member NoteProperty "UtilsPath" $(join-path $Jsh.UtilsRawPath $env:PROCESSOR_ARCHITECTURE)
$Jsh | add-member NoteProperty "GoMap" @{}
$Jsh | add-member NoteProperty "ScriptMap" @{}
#==============================================================================
#==============================================================================
# Functions
#==============================================================================
# Load snapin's if they are available
function Jsh.Load-Snapin([string]$name) {
$list = @( get-pssnapin | ? { $_.Name -eq $name })
if ( $list.Length -gt 0 ) {
return;
}
$snapin = get-pssnapin -registered | ? { $_.Name -eq $name }
if ( $snapin -ne $null ) {
add-pssnapin $name
}
}
# Update the configuration from the source code server
function Jsh.Update-WinConfig([bool]$force=$false) {
# First see if we've updated in the last day
$target = join-path $env:temp "Jsh.Update.txt"
$update = $false
if ( test-path $target ) {
$last = [datetime] (gc $target)
if ( ([DateTime]::Now - $last).Days -gt 1) {
$update = $true
}
} else {
$update = $true;
}
if ( $update -or $force ) {
write-host "Checking for winconfig updates"
pushd $Jsh.ConfigPath
$output = @(& svn update)
if ( $output.Length -gt 1 ) {
write-host "WinConfig updated. Re-running configuration"
cd $Jsh.ScriptPath
& .\ConfigureAll.ps1
. .\Profile.ps1
}
sc $target $([DateTime]::Now)
popd
}
}
function Jsh.Push-Path([string] $location) {
go $location $true
}
function Jsh.Go-Path([string] $location, [bool]$push = $false) {
if ( $location -eq "" ) {
write-output $Jsh.GoMap
} elseif ( $Jsh.GoMap.ContainsKey($location) ) {
if ( $push ) {
push-location $Jsh.GoMap[$location]
} else {
set-location $Jsh.GoMap[$location]
}
} elseif ( test-path $location ) {
if ( $push ) {
push-location $location
} else {
set-location $location
}
} else {
write-output "$loctaion is not a valid go location"
write-output "Current defined locations"
write-output $Jsh.GoMap
}
}
function Jsh.Run-Script([string] $name) {
if ( $Jsh.ScriptMap.ContainsKey($name) ) {
. $Jsh.ScriptMap[$name]
} else {
write-output "$name is not a valid script location"
write-output $Jsh.ScriptMap
}
}
# Set the prompt
function prompt() {
if ( Test-Admin ) {
write-host -NoNewLine -f red "Admin "
}
write-host -NoNewLine -ForegroundColor Green $(get-location)
foreach ( $entry in (get-location -stack)) {
write-host -NoNewLine -ForegroundColor Red '+';
}
write-host -NoNewLine -ForegroundColor Green '>'
' '
}
#==============================================================================
#==============================================================================
# Alias
#==============================================================================
set-alias gcid Get-ChildItemDirectory
set-alias wget Get-WebItem
set-alias ss select-string
set-alias ssr Select-StringRecurse
set-alias go Jsh.Go-Path
set-alias gop Jsh.Push-Path
set-alias script Jsh.Run-Script
set-alias ia Invoke-Admin
set-alias ica Invoke-CommandAdmin
set-alias isa Invoke-ScriptAdmin
#==============================================================================
pushd $Jsh.ScriptPath
# Setup the go locations
$Jsh.GoMap["ps"] = $Jsh.ScriptPath
$Jsh.GoMap["config"] = $Jsh.ConfigPath
$Jsh.GoMap["~"] = "~"
# Setup load locations
$Jsh.ScriptMap["profile"] = join-path $Jsh.ScriptPath "Profile.ps1"
$Jsh.ScriptMap["common"] = $(join-path $Jsh.ScriptPath "LibraryCommon.ps1")
$Jsh.ScriptMap["svn"] = $(join-path $Jsh.ScriptPath "LibrarySubversion.ps1")
$Jsh.ScriptMap["subversion"] = $(join-path $Jsh.ScriptPath "LibrarySubversion.ps1")
$Jsh.ScriptMap["favorites"] = $(join-path $Jsh.ScriptPath "LibraryFavorites.ps1")
$Jsh.ScriptMap["registry"] = $(join-path $Jsh.ScriptPath "LibraryRegistry.ps1")
$Jsh.ScriptMap["reg"] = $(join-path $Jsh.ScriptPath "LibraryRegistry.ps1")
$Jsh.ScriptMap["token"] = $(join-path $Jsh.ScriptPath "LibraryTokenize.ps1")
$Jsh.ScriptMap["unit"] = $(join-path $Jsh.ScriptPath "LibraryUnitTest.ps1")
$Jsh.ScriptMap["tfs"] = $(join-path $Jsh.ScriptPath "LibraryTfs.ps1")
$Jsh.ScriptMap["tab"] = $(join-path $Jsh.ScriptPath "TabExpansion.ps1")
# Load the common functions
. script common
. script tab
$global:libCommonCertPath = (join-path $Jsh.ConfigPath "Data\Certs\jaredp_code.pfx")
# Load the snapin's we want
Jsh.Load-Snapin "pscx"
Jsh.Load-Snapin "JshCmdlet"
# Setup the Console look and feel
$host.UI.RawUI.ForegroundColor = "Yellow"
if ( Test-Admin ) {
$title = "Administrator Shell - {0}" -f $host.UI.RawUI.WindowTitle
$host.UI.RawUI.WindowTitle = $title;
}
# Call the computer specific profile
$compProfile = join-path "Computers" ($env:ComputerName + "_Profile.ps1")
if ( -not (test-path $compProfile)) { ni $compProfile -type File | out-null }
write-host "Computer profile: $compProfile"
. ".\$compProfile"
$Jsh.ScriptMap["cprofile"] = resolve-path ($compProfile)
# If the computer name is the same as the domain then we are not
# joined to active directory
if ($env:UserDomain -ne $env:ComputerName ) {
# Call the domain specific profile data
write-host "Domain $env:UserDomain"
$domainProfile = join-path $env:UserDomain "Profile.ps1"
if ( -not (test-path $domainProfile)) { ni $domainProfile -type File | out-null }
. ".\$domainProfile"
}
# Run the get-fortune command if JshCmdlet was loaded
if ( get-command "get-fortune" -ea SilentlyContinue ) {
get-fortune -timeout 1000
}
# Finished with the profile, go back to the original directory
popd
# Look for updates
Jsh.Update-WinConfig
# Because this profile is run in the same context, we need to remove any
# variables manually that we don't want exposed outside this script
A: apropos.
Although I think this has been superseded by a recent or upcoming release.
##############################################################################
## Search the PowerShell help documentation for a given keyword or regular
## expression.
##
## Example:
## Get-HelpMatch hashtable
## Get-HelpMatch "(datetime|ticks)"
##############################################################################
function apropos {
param($searchWord = $(throw "Please specify content to search for"))
$helpNames = $(get-help *)
foreach($helpTopic in $helpNames)
{
$content = get-help -Full $helpTopic.Name | out-string
if($content -match $searchWord)
{
$helpTopic | select Name,Synopsis
}
}
}
A: I keep a little bit of everything. Mostly, my profile sets up all the environment (including calling scripts to set up my .NET/VS and Java development environment).
I also redefine the prompt() function with my own style (see it in action), set up several aliases to other scripts and commands, and change what $HOME points to.
Here's my complete profile script.
A: I add this function so that I can see disk usage easily:
function df {
$colItems = Get-wmiObject -class "Win32_LogicalDisk" -namespace "root\CIMV2" `
-computername localhost
foreach ($objItem in $colItems) {
write $objItem.DeviceID $objItem.Description $objItem.FileSystem `
($objItem.Size / 1GB).ToString("f3") ($objItem.FreeSpace / 1GB).ToString("f3")
}
}
A: Set-PSDebug -Strict
You will benefit if you have ever hunted for a stupid typo, e.g. outputting $varsometext instead of $var sometext.
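A minimal illustration (the variable names are only examples):
Set-PSDebug -Strict
$var = 'some'
$x = $varsometext     # strict mode throws here: the variable has never been set
$x = "$var sometext"  # fine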
A: ##############################################################################
# Get an XPath Navigator object based on the input string containing xml
function get-xpn ($text) {
$rdr = [System.IO.StringReader] $text
$trdr = [system.io.textreader]$rdr
$xpdoc = [System.XML.XPath.XPathDocument] $trdr
$xpdoc.CreateNavigator()
}
Useful for working with xml, such as output from svn commands with --xml.
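For instance, a hypothetical query over svn log output (assumes svn is on the path):
$nav = get-xpn ((svn log --xml) -join "`n")
$nav.Select('//logentry/@revision') | ForEach-Object { $_.Value }   # list the revision numbers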
A: This creates a scripts: drive and adds it to your path. Note, you must create the folder yourself. Next time you need to get back to it, just type "scripts:" and hit enter, just like any drive letter in Windows.
$env:path += ";$profiledir\scripts"
New-PSDrive -Name Scripts -PSProvider FileSystem -Root $profiledir\scripts
A: This will add snapins you have installed into your powershell session. The reason you may want to do something like this is that it's easy to maintain, and works well if you sync your profile across multiple systems. If a snapin isn't installed, you won't see an error message.
# ---------------------------------------------------------------------------
# Add third-party snapins
# ---------------------------------------------------------------------------
$snapins = @(
"Quest.ActiveRoles.ADManagement",
"PowerGadgets",
"VMware.VimAutomation.Core",
"NetCmdlets"
)
$snapins | ForEach-Object {
if ( Get-PSSnapin -Registered $_ -ErrorAction SilentlyContinue ) {
Add-PSSnapin $_
}
}
A: I put all my functions and aliases in separate script files and then dot source them in my profile:
. c:\scripts\posh\jdh-functions.ps1
A: I often find myself needing some basic aggregates to count/sum things. I've defined these functions and use them often; they work really nicely at the end of a pipeline:
#
# useful agregate
#
function count
{
BEGIN { $x = 0 }
PROCESS { $x += 1 }
END { $x }
}
function product
{
BEGIN { $x = 1 }
PROCESS { $x *= $_ }
END { $x }
}
function sum
{
BEGIN { $x = 0 }
PROCESS { $x += $_ }
END { $x }
}
function average
{
BEGIN { $max = 0; $curr = 0 }
PROCESS { $max += $_; $curr += 1 }
END { $max / $curr }
}
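For example, at the end of a pipeline:
1..5 | sum       # 15
1..5 | product   # 120
1..5 | average   # 3
gci  | count     # number of items in the current directory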
To be able to get time and path with colors in my prompt:
function Get-Time { return $(get-date | foreach { $_.ToLongTimeString() } ) }
function prompt
{
# Write the time
write-host "[" -noNewLine
write-host $(Get-Time) -foreground yellow -noNewLine
write-host "] " -noNewLine
# Write the path
write-host $($(Get-Location).Path.replace($home,"~").replace("\","/")) -foreground green -noNewLine
write-host $(if ($nestedpromptlevel -ge 1) { '>>' }) -noNewLine
return "> "
}
The following functions are stolen from a blog and modified to fit my taste, but ls with colors is very nice:
# LS.MSH
# Colorized LS function replacement
# /\/\o\/\/ 2006
# http://mow001.blogspot.com
function LL
{
param ($dir = ".", $all = $false)
$origFg = $host.ui.rawui.foregroundColor
if ( $all ) { $toList = ls -force $dir }
else { $toList = ls $dir }
foreach ($Item in $toList)
{
Switch ($Item.Extension)
{
".Exe" {$host.ui.rawui.foregroundColor = "Yellow"}
".cmd" {$host.ui.rawui.foregroundColor = "Red"}
".msh" {$host.ui.rawui.foregroundColor = "Red"}
".vbs" {$host.ui.rawui.foregroundColor = "Red"}
Default {$host.ui.rawui.foregroundColor = $origFg}
}
if ($item.Mode.StartsWith("d")) {$host.ui.rawui.foregroundColor = "Green"}
$item
}
$host.ui.rawui.foregroundColor = $origFg
}
function lla
{
param ( $dir=".")
ll $dir $true
}
function la { ls -force }
And some shortcuts to avoid really repetitive filtering tasks:
# behave like a grep command
# but work on objects, used
# to be still be allowed to use grep
filter match( $reg )
{
if ($_.tostring() -match $reg)
{ $_ }
}
# behave like a grep -v command
# but work on objects
filter exclude( $reg )
{
if (-not ($_.tostring() -match $reg))
{ $_ }
}
# behave like match but use only -like
filter like( $glob )
{
if ($_.toString() -like $glob)
{ $_ }
}
filter unlike( $glob )
{
if (-not ($_.tostring() -like $glob))
{ $_ }
}
A: A function to view the entire history of typed commands (Get-History and its alias h show only the last 32 commands by default):
function ha {
Get-History -count $MaximumHistoryCount
}
A: You can see my PowerShell profile at http://github.com/jamesottaway/windowspowershell
If you use Git to clone my repo into your Documents folder (or whatever folder is above 'WindowsPowerShell' in your $PROFILE variable), you'll get all of my goodness.
The main profile.ps1 sets the subfolder with the name Addons as a PSDrive, and then finds all .ps1 files underneath that folder to load.
I quite like the go command, which stores a dictionary of shorthand locations to visit easily. For example, go vsp will take me to C:\Visual Studio 2008\Projects.
I also like overriding the Set-Location cmdlet to run both Set-Location and Get-ChildItem.
My other favourite is being able to do a mkdir which does Set-Location xyz after running New-Item xyz -Type Directory.
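A minimal sketch of those two overrides might look like this (my own illustration, not the code from the repo):
# cd lists the target directory after changing to it; mkdir creates and enters it
function Set-LocationAndList {
    param([string]$Path)
    Microsoft.PowerShell.Management\Set-Location $Path
    Get-ChildItem
}
Set-Alias cd Set-LocationAndList -Option AllScope
function mkdir {
    param([string]$Path)
    New-Item $Path -Type Directory | Out-Null
    Set-LocationAndList $Path
}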
A: I actually keep mine on github.
A: Function funcOpenPowerShellProfile
{
Notepad $PROFILE
}
Set-Alias fop funcOpenPowerShellProfile
Only a sagaciously-lazy individual would tell you that fop is so much easier to type than Notepad $PROFILE at the prompt, unless, of course, you associate "fop" with a 17th century English ninny.
If you wanted, you could take it a step further and make it somewhat useful:
Function funcOpenPowerShellProfile
{
$fileProfileBackup = $PROFILE + '.bak'
cp $PROFILE $fileProfileBackup
PowerShell_ISE $PROFILE # Replace with Desired IDE/ISE for Syntax Highlighting
}
Set-Alias fop funcOpenPowerShellProfile
For satisfying survivalist-paranoia:
Function funcOpenPowerShellProfile
{
$fileProfilePathParts = @($PROFILE.Split('\'))
$fileProfileName = $fileProfilePathParts[-1]
$fileProfilePathPartNum = 0
$fileProfileHostPath = $fileProfilePathParts[$fileProfilePathPartNum] + '\'
$fileProfileHostPathPartsCount = $fileProfilePathParts.Count - 2
# Arrays start at 0, but the Count starts at 1; if both started at 0 or 1,
# then a -1 would be fine, but the realized discrepancy is 2
Do
{
$fileProfilePathPartNum++
$fileProfileHostPath = $fileProfileHostPath + `
$fileProfilePathParts[$fileProfilePathPartNum] + '\'
}
While
(
$fileProfilePathPartNum -LT $fileProfileHostPathPartsCount
)
$fileProfileBackupTime = [string](date -format u) -replace ":", ""
$fileProfileBackup = $fileProfileHostPath + `
$fileProfileBackupTime + ' - ' + $fileProfileName + '.bak'
cp $PROFILE $fileProfileBackup
cd $fileProfileHostPath
$fileProfileBackupNamePattern = $fileProfileName + '.bak'
$fileProfileBackups = @(ls | Where {$_.Name -Match $fileProfileBackupNamePattern} | `
Sort Name)
$fileProfileBackupsCount = $fileProfileBackups.Count
$fileProfileBackupThreshold = 5 # Change as Desired
If
(
$fileProfileBackupsCount -GT $fileProfileBackupThreshold
)
{
$fileProfileBackupsDeleteNum = $fileProfileBackupsCount - `
$fileProfileBackupThreshold
$fileProfileBackupsIndexNum = 0
Do
{
rm $fileProfileBackups[$fileProfileBackupsIndexNum]
$fileProfileBackupsIndexNum++;
$fileProfileBackupsDeleteNum--
}
While
(
$fileProfileBackupsDeleteNum -NE 0
)
}
PowerShell_ISE $PROFILE
# Replace 'PowerShell_ISE' with Desired IDE (IDE's path may be needed in
# '$Env:PATH' for this to work; if you can start it from the "Run" window,
# you should be fine)
}
Set-Alias fop funcOpenPowerShellProfile
A: amongst many other things:
function w {
explorer .
}
opens an explorer window in the current directory
function startover {
iisreset /restart
iisreset /stop
rm "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\*.*" -recurse -force -Verbose
iisreset /start
}
gets rid of everything in my temporary asp.net files (useful for working on managed code that has dependencies on buggy unmanaged code)
function edit($x) {
. 'C:\Program Files (x86)\Notepad++\notepad++.exe' $x
}
edits $x in notepad++
A: Jeffrey Snover's Start-NewScope because re-launching the shell can be a drag.
I never got comfortable with the diruse options, so:
function Get-FolderSizes { # poor man's du
[cmdletBinding()]
param(
[parameter(mandatory=$true)]$Path,
[parameter(mandatory=$false)]$SizeMB,
[parameter(mandatory=$false)]$ExcludeFolders,
[parameter(mandatory=$false)][switch]$AsObject
) #close param
# http://blogs.technet.com/b/heyscriptingguy/archive/2013/01/05/weekend-scripter-sorting-folders-by-size.aspx
# uses Christoph Schneegans' Find-Files https://schneegans.de/windows/find-files/ because "gci -rec" follows junctions in "special" folders
$pathCheck = test-path $path
if (!$pathcheck) { Write-Error "Invalid path. Wants gci's -path parameter."; return }
if (!(Get-Command Find-Files)) { Write-Error "Required function Find-Files not found"; return }
$fso = New-Object -ComObject scripting.filesystemobject
$parents = Get-ChildItem $path -Force | where { $_.PSisContainer -and $ExcludeFolders -notContains $_.name -and !$_.LinkType }
$folders = Foreach ($folder in $parents)
{
$getFolder = $fso.getFolder( $folder.fullname.tostring() )
if (!$getFolder.Size)
{
#for "special folders" like appdata
# maybe "-Attributes !ReparsePoint" works in v6? https://stackoverflow.com/a/59952913/
# what about https://superuser.com/a/650476/ ?
# abandoned because it follows junctions, distorting results # $length = gci $folder.FullName -Recurse -Force -EA SilentlyContinue | Measure -Property Length -Sum
$length = Find-Files $folder.FullName -EA SilentlyContinue | Measure -Property Length -Sum -EA SilentlyContinue
$sizeMBs = "{0:N0}" -f ($length.Sum /1mb)
} #close if size property is null
else { $sizeMBs = "{0:N0}" -f ($getFolder.size /1mb) }
New-Object -TypeName psobject -Property @{
Name = $getFolder.Path
SizeMB = $sizeMBs
} #close new obj property
} #close foreach folder
#here's the output
$foldersObj = $folders | Sort @{E={[decimal]$_.SizeMB}} -Descending | ? {[Decimal]$_.SizeMB -gt $SizeMB}
if (!$AsObject) { $foldersObj | Format-Table -AutoSize } else { $foldersObj }
#calculate the total including contents
$sum = $folders | Select -Expand SizeMB | Measure -Sum | Select -Expand Sum
$sum += ( gci $path | where {!$_.psIsContainer} | Measure -Property Length -Sum | Select -Expand Sum ) / 1mb
$sumString = "{0:n2}" -f ($sum /1kb)
$sumString + " GB total"
} #end function
Set-Alias gfs Get-FolderSizes
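Hypothetical usage (requires the Find-Files function defined below):
gfs C:\Users -SizeMB 500                     # folders under C:\Users larger than 500 MB
gfs C:\ -ExcludeFolders 'Windows' -AsObject  # skip C:\Windows, emit objects instead of a table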
function Find-Files
{
<# by Christoph Schneegans https://schneegans.de/windows/find-files/ - used in Get-FolderSizes aka gfs
.SYNOPSIS
Lists the contents of a directory. Unlike Get-ChildItem, this function does not recurse into symbolic links or junctions in order to avoid infinite loops.
#>
param (
[Parameter( Mandatory=$false )]
[string]
# Specifies the path to the directory whose contents are to be listed. By default, the current working directory is used.
$LiteralPath = (Get-Location),
[Parameter( Mandatory=$false )]
# Specifies a filter that is applied to each file or directory. Wildcards ? and * are supported.
$Filter,
[Parameter( Mandatory=$false )]
[boolean]
# Specifies if file objects should be returned. By default, all file system objects are returned.
$File = $true,
[Parameter( Mandatory=$false )]
[boolean]
# Specifies if directory objects should be returned. By default, all file system objects are returned.
$Directory = $true,
[Parameter( Mandatory=$false )]
[boolean]
# Specifies if reparse point objects should be returned. By default, all file system objects are returned.
$ReparsePoint = $true,
[Parameter( Mandatory=$false )]
[boolean]
# Specifies if the top directory should be returned. By default, all file system objects are returned.
$Self = $true
)
function Enumerate( [System.IO.FileSystemInfo] $Item ) {
$Item;
if ( $Item.GetType() -eq [System.IO.DirectoryInfo] -and ! $Item.Attributes.HasFlag( [System.IO.FileAttributes]::ReparsePoint ) ) {
foreach ($ChildItem in $Item.EnumerateFileSystemInfos() ) {
Enumerate $ChildItem;
}
}
}
function FilterByName {
process {
if ( ( $Filter -eq $null ) -or ( $_.Name -ilike $Filter ) ) {
$_;
}
}
}
function FilterByType {
process {
if ( $_.GetType() -eq [System.IO.FileInfo] ) {
if ( $File ) { $_; }
} elseif ( $_.Attributes.HasFlag( [System.IO.FileAttributes]::ReparsePoint ) ) {
if ( $ReparsePoint ) { $_; }
} else {
if ( $Directory ) { $_; }
}
}
}
$Skip = if ($Self) { 0 } else { 1 };
Enumerate ( Get-Item -LiteralPath $LiteralPath -Force ) | Select-Object -Skip $Skip | FilterByName | FilterByType;
} # end function find-files
The most valuable bit above is Christoph Schneegans' Find-Files https://schneegans.de/windows/find-files
For pointing at stuff:
function New-URLfile {
param( [parameter(mandatory=$true)]$Target, [parameter(mandatory=$true)]$Link )
if ($target -match "^\." -or $link -match "^\.") {"Full paths plz."; break}
$content = @()
$header = '[InternetShortcut]'
$content += $header
$content += "URL=" + $target
$content | out-file $link
ii $link
} #end function
function New-LNKFile {
param( [parameter(mandatory=$true)]$Target, [parameter(mandatory=$true)]$Link )
if ($target -match "^\." -or $link -match "^\.") {"Full paths plz."; break}
$WshShell = New-Object -comObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut($link)
$Shortcut.TargetPath = $target
$shortCut.save()
} #end function new-lnkfile
Poor man's grep? For searching large txt files.
function Search-TextFile {
param(
[parameter(mandatory=$true)]$File,
[parameter(mandatory=$true)]$SearchText
) #close param
if ( !(Test-path $File) )
{
Write-Error "File not found: $file"
return
}
$fullPath = Resolve-Path $file | select -Expand ProviderPath
$lines = [System.IO.File]::ReadLines($fullPath)
foreach ($line in $lines) { if ($line -match $SearchText) {$line} }
} #end function Search-TextFile
Set-Alias stf Search-TextFile
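Example usage (the log file name is hypothetical):
stf .\app.log 'timeout'   # prints every line of app.log that matches 'timeout'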
Lists programs installed on a remote computer.
function Get-InstalledProgram { [cmdletBinding()] #http://blogs.technet.com/b/heyscriptingguy/archive/2011/11/13/use-powershell-to-quickly-find-installed-software.aspx
param( [parameter(mandatory=$true)]$Comp,[parameter(mandatory=$false)]$Name )
$keys = 'SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall','SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall'
TRY { $RegBase = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]::LocalMachine,$Comp) }
CATCH {
$rrSvc = gwmi win32_service -comp $comp -Filter {name='RemoteRegistry'}
if (!$rrSvc) {"Unable to connect. Make sure that this computer is on the network, has remote administration enabled, `nand that both computers are running the remote registry service."; break}
#Enable and start RemoteRegistry service
if ($rrSvc.State -ne 'Running') {
if ($rrSvc.StartMode -eq 'Disabled') { $null = $rrSvc.ChangeStartMode('Manual'); $undoMe2 = $true }
$null = $rrSvc.StartService() ; $undoMe = $true
} #close if rrsvc not running
else {"Unable to connect. Make sure that this computer is on the network, has remote administration enabled, `nand that both computers are running the remote registry service."; break}
$RegBase = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey([Microsoft.Win32.RegistryHive]::LocalMachine,$Comp)
} #close if failed to connect regbase
$out = @()
foreach ($key in $keys) {
if ( $RegBase.OpenSubKey($Key) ) { #avoids errors on 32bit OS
foreach ( $entry in $RegBase.OpenSubKey($Key).GetSubkeyNames() ) {
$sub = $RegBase.OpenSubKey( ($key + '\' + $entry) )
if ($sub) { $row = $null
$row = [pscustomobject]@{
Name = $RegBase.OpenSubKey( ($key + '\' + $entry) ).GetValue('DisplayName')
InstallDate = $RegBase.OpenSubKey( ($key + '\' + $entry) ).GetValue('InstallDate')
Version = $RegBase.OpenSubKey( ($key + '\' + $entry) ).GetValue('DisplayVersion')
} #close row
$out += $row
} #close if sub
} #close foreach entry
} #close if key exists
} #close foreach key
$out | where {$_.name -and $_.name -match $Name}
if ($undoMe) { $null = $rrSvc.StopService() }
if ($undoMe2) { $null = $rrSvc.ChangeStartMode('Disabled') }
} #end function
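Example usage (the computer name is hypothetical):
Get-InstalledProgram -Comp SERVER01 -Name 'Office'   # everything with 'Office' in its display name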
Going meta, spreading the gospel, whatnot
function Copy-ProfilePS1 ($Comp,$User) {
if (!$User) {$User = $env:USERNAME}
$targ = "\\$comp\c$\users\$User\Documents\WindowsPowershell\"
if (Test-Path $targ)
{
$cmd = "copy /-Y $profile $targ"
cmd /c $cmd
} else {"Path not found! $targ"}
} #end function CopyProfilePS1
A: This iterates through a scripts PSDrive and dot-sources everything that begins with "lib-".
### ---------------------------------------------------------------------------
### Load function / filter definition library
### ---------------------------------------------------------------------------
Get-ChildItem scripts:\lib-*.ps1 | % {
. $_
write-host "Loading library file:`t$($_.name)"
}
A: To set up my Visual Studio build environment from PowerShell I took the VsVars32 from here, and use it all the time.
###############################################################################
# Exposes the environment vars in a batch and sets them in this PS session
###############################################################################
function Get-Batchfile($file)
{
$theCmd = "`"$file`" & set"
cmd /c $theCmd | Foreach-Object {
$thePath, $theValue = $_.split('=')
Set-Item -path env:$thePath -value $theValue
}
}
###############################################################################
# Sets the VS variables for this PS session to use
###############################################################################
function VsVars32($version = "9.0")
{
$theKey = "HKLM:SOFTWARE\Microsoft\VisualStudio\" + $version
$theVsKey = get-ItemProperty $theKey
$theVsInstallPath = [System.IO.Path]::GetDirectoryName($theVsKey.InstallDir)
$theVsToolsDir = [System.IO.Path]::GetDirectoryName($theVsInstallPath)
$theVsToolsDir = [System.IO.Path]::Combine($theVsToolsDir, "Tools")
$theBatchFile = [System.IO.Path]::Combine($theVsToolsDir, "vsvars32.bat")
Get-Batchfile $theBatchFile
[System.Console]::Title = "Visual Studio " + $version + " Windows Powershell"
}
A: start-transcript. This will write out your entire session to a text file. Great for training new hires on how to use Powershell in the environment.
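For example (the transcript path is only an illustration):
Start-Transcript -Path "$env:USERPROFILE\Documents\PS_$(Get-Date -Format yyyyMMdd_HHmmss).txt"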
A: My prompt contains:
$width = ($Host.UI.RawUI.WindowSize.Width - 2 - $(Get-Location).ToString().Length)
$hr = New-Object System.String @('-',$width)
Write-Host -ForegroundColor Red $(Get-Location) $hr
Which gives me a divider between commands that's easy to see when scrolling back. It also shows me the current directory without using horizontal space on the line that I'm typing on.
For example:
C:\Users\Jay ----------------------------------------------------------------------------------------------------------
[1] PS>
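Wrapped into a complete prompt function it might look like the sketch below; the [1] counter is my assumption from the sample output, implemented here with Get-History:
function prompt {
    $width = ($Host.UI.RawUI.WindowSize.Width - 2 - $(Get-Location).ToString().Length)
    $hr = New-Object System.String @('-', $width)
    Write-Host -ForegroundColor Red $(Get-Location) $hr
    "[$((Get-History).Count + 1)] PS> "
}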
A: $MaximumHistoryCount=1024
function hist {get-history -count 256 | %{$_.commandline}}
New-Alias which get-command
function guidConverter([byte[]] $gross) {
    $GUID = "{" + $gross[3].ToString("X2") + $gross[2].ToString("X2") + `
        $gross[1].ToString("X2") + $gross[0].ToString("X2") + "-" + `
        $gross[5].ToString("X2") + $gross[4].ToString("X2") + "-" + `
        $gross[7].ToString("X2") + $gross[6].ToString("X2") + "-" + `
        $gross[8].ToString("X2") + $gross[9].ToString("X2") + "-" + `
        $gross[10].ToString("X2") + $gross[11].ToString("X2") + $gross[12].ToString("X2") + `
        $gross[13].ToString("X2") + $gross[14].ToString("X2") + $gross[15].ToString("X2") + "}"
    $GUID
}
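A quick sanity check (the function reorders the GUID's little-endian byte array back into its canonical string form):
$g = [guid]::NewGuid()
guidConverter $g.ToByteArray()   # prints the same GUID, uppercase, wrapped in braces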
A: I keep my profile empty. Instead, I have folders of scripts I can navigate to load functionality and aliases into the session. A folder will be modular, with libraries of functions and assemblies. For ad hoc work, I'll have a script that loads aliases and functions. If I want to munge event logs, I'd navigate to a folder scripts\eventlogs and execute
PS > . .\DotSourceThisToLoadSomeHandyEventLogMonitoringFunctions.ps1
I do this because I need to share scripts with others or move them from machine to machine. I like to be able to copy a folder of scripts and assemblies and have it just work on any machine for any user.
But you want a fun collection of tricks. Here's a script that many of my "profiles" depend on. It allows calls to web services that use self signed SSL for ad hoc exploration of web services in development. Yes, I freely mix C# in my powershell scripts.
# Using a target web service that requires SSL, but server is self-signed.
# Without this, we'll fail unable to establish trust relationship.
function Set-CertificateValidationCallback
{
try
{
Add-Type @'
using System;
public static class CertificateAcceptor{
public static void SetAccept()
{
System.Net.ServicePointManager.ServerCertificateValidationCallback = AcceptCertificate;
}
private static bool AcceptCertificate(Object sender,
System.Security.Cryptography.X509Certificates.X509Certificate certificate,
System.Security.Cryptography.X509Certificates.X509Chain chain,
System.Net.Security.SslPolicyErrors policyErrors)
{
Console.WriteLine("Accepting certificate and ignoring any SSL errors.");
return true;
}
}
'@
}
catch {} # Already exists? Find a better way to check.
[CertificateAcceptor]::SetAccept()
}
A: Great question. Because I deal with several different PowerShell hosts, I do a little logging in each of several profiles, just to make the context of any other messages clearer. In profile.ps1, I currently only have that, but I sometimes change it based on context:
if ($PSVersionTable.PsVersion.Major -ge 3) {
Write-Host "Executing $PSCommandPath"
}
My favorite host is the ISE; in Microsoft.PowerShellIse_profile.ps1, I have:
if ($PSVersionTable.PsVersion.Major -ge 3) {
Write-Host "Executing $PSCommandPath"
}
if ( New-PSDrive -ErrorAction Ignore One FileSystem `
(Get-ItemProperty hkcu:\Software\Microsoft\SkyDrive UserFolder).UserFolder) {
Write-Host -ForegroundColor Green "PSDrive One: mapped to local OneDrive/SkyDrive folder"
}
Import-Module PSCX
$PSCX:TextEditor = (get-command Powershell_ISE).Path
$PSDefaultParameterValues = @{
"Get-Help:ShowWindow" = $true
"Help:ShowWindow" = $true
"Out-Default:OutVariable" = "0"
}
#Script Browser Begin
#Version: 1.2.1
Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\System.Windows.Interactivity.dll'
Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\ScriptBrowser.dll'
Add-Type -Path 'C:\Program Files (x86)\Microsoft Corporation\Microsoft Script Browser\BestPractices.dll'
$scriptBrowser = $psISE.CurrentPowerShellTab.VerticalAddOnTools.Add('Script Browser', [ScriptExplorer.Views.MainView], $true)
$scriptAnalyzer = $psISE.CurrentPowerShellTab.VerticalAddOnTools.Add('Script Analyzer', [BestPractices.Views.BestPracticesView], $true)
$psISE.CurrentPowerShellTab.VisibleVerticalAddOnTools.SelectedAddOnTool = $scriptBrowser
#Script Browser End
A: Of everything not already listed, Start-Steroids has to be my favorite, except for maybe Start-Transcript.
(http://www.powertheshell.com/isesteroids2-2/)
{
"language": "en",
"url": "https://stackoverflow.com/questions/138144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "91"
}
Q: Is ncurses available for windows? Are there any ncurses libraries in C/C++ for Windows that emulate ncurses in native resizable Win32 windows (not in console mode)?
A: There's an ongoing effort for a PDCurses port:
http://www.mail-archive.com/pdcurses-l@lightlink.com/msg00129.html
http://www.projectpluto.com/win32a.htm
A: Such a thing probably does not exist "as-is". It doesn't really exist on Linux or other UNIX-like operating systems either though.
ncurses is only a library that helps you manage interactions with the underlying terminal environment. But it doesn't provide a terminal emulator itself.
The thing that actually displays stuff on the screen (which in your requirement is listed as "native resizable win32 windows") is usually called a Terminal Emulator. If you don't like the one that comes with Windows (you aren't alone; no person on Earth does) there are a few alternatives. There is Console, which in my experience works sometimes and appears to just wrap an underlying Windows terminal emulator (I don't know for sure, but I'm guessing, since there is a menu option to actually get access to that underlying terminal emulator, and sure enough an old crusty Windows/DOS box appears which mirrors everything in the Console window).
A better option
Another option, which may be more appealing is puttycyg. It hooks in to Putty (which, coming from a Linux background, is pretty close to what I'm used to, and free) but actually accesses an underlying cygwin instead of the Windows command interpreter (CMD.EXE). So you get all the benefits of Putty's awesome terminal emulator, as well as nice ncurses (and many other) libraries provided by cygwin. Add a couple command line arguments to the Shortcut that launches Putty (or the Batch file) and your app can be automatically launched without going through Putty's UI.
{
"language": "en",
"url": "https://stackoverflow.com/questions/138153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
}
Q: Java - Console-like web applet Hey, I've been developing an application in the windows console with Java, and want to put it online in all of its console-graphics-glory.
Is there a simple web applet API I can use to port my app over?
I'm just using basic System.out and System.in functionality, but I'm happy to rebuild my I/O wrappers.
I think something along these lines would be a great asset to any beginning Java developers who want to put their work online.
A: Sure, just make it into an applet: put a small Swing UI on it with a JFrame with two components - one for writing output to, and one for entering input from. Embed the applet in the page.
A: I did as Lars suggested and wrote my own.
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.io.*;
import java.awt.Font;
public class Applet extends JFrame {
static final long serialVersionUID = 1;
/** Text area for console output. */
protected JTextArea textArea;
/** Text box for user input. */
protected JTextField textBox;
/** "GO" button, in case they don't know to hit enter. */
protected JButton goButton;
protected PrintStream printStream;
protected BufferedReader bufferedReader;
/**
* This function is called when they hit ENTER or click GO.
*/
ActionListener actionListener = new ActionListener() {
public void actionPerformed(ActionEvent actionEvent) {
goButton.setEnabled(false);
SwingUtilities.invokeLater(
new Thread() {
public void run() {
String userInput = textBox.getText();
printStream.println("> "+userInput);
Input.inString = userInput;
textBox.setText("");
goButton.setEnabled(true);
}
}
);
}
};
public void println(final String string) {
SwingUtilities.invokeLater(
new Thread() {
public void run() {
printStream.println(string);
}
}
);
}
public void printmsg(final String string) {
SwingUtilities.invokeLater(
new Thread() {
public void run() {
printStream.print(string);
}
}
);
}
public Applet() throws IOException {
super("My Applet Title");
Container contentPane = getContentPane();
textArea = new JTextArea(30, 60);
JScrollPane jScrollPane = new JScrollPane(textArea);
final JScrollBar jScrollBar = jScrollPane.getVerticalScrollBar();
contentPane.add(BorderLayout.NORTH, jScrollPane);
textArea.setFocusable(false);
textArea.setAutoscrolls(true);
textArea.setFont(new Font("Comic Sans MS", Font.TRUETYPE_FONT, 14));
// TODO This might be overkill
new Thread() {
public void run() {
while(true) {
jScrollBar.setValue(jScrollBar.getMaximum());
try{
Thread.sleep(100);
} catch (Exception e) {}
}
}
}.start();
JPanel panel;
contentPane.add(BorderLayout.CENTER, panel = new JPanel());
panel.add(textBox = new JTextField(55));
textBox.addActionListener(actionListener);
panel.add(goButton = new JButton("GO"));
goButton.addActionListener(actionListener);
pack();
// End of GUI stuff
PipedInputStream inputStream;
PipedOutputStream outputStream;
inputStream = new PipedInputStream();
outputStream = new PipedOutputStream(inputStream);
bufferedReader = new BufferedReader(new InputStreamReader(inputStream, "ISO8859_1"));
printStream = new PrintStream(outputStream);
new Thread() {
public void run() {
try {
String line;
while ((line = bufferedReader.readLine()) != null) {
textArea.append(line+"\n");
}
} catch (IOException ioException) {
textArea.append("ERROR");
}
}
}.start();
}
}
The code below was in a separate class, "Input", which has a static "inString" string.
public static String getString() {
inString = "";
// Wait for input
while (inString.equals("")) { // compare strings by value, not by reference
try{
Thread.sleep(100);
} catch (Exception e) {}
}
return inString;
}
Through-out the lifespan of the project I will probably maintain this code some more, but at this point - it works :)
A: As a premier example of a glorious and incredibly useful console-like webapp, please see goosh, the Google Shell. I cannot imagine browsing the Net without it anymore.
Granted, there's no source code, but you might get out a bit of its magic by using Firebug or so.
Using a TextArea might be a bug-prone approach. Remember that you'll need to do both input and output to this TextArea and that you must thus keep track of cursor position. I would suggest that, if you really do this approach, you abstract away over a plain TextArea (inheritance, maybe?) and use a component that has, e.g. a prompt() to show the prompt and enable input and a also follows the usual shell abstraction of having stdin (an InputStream, that reads from the prompt, but can be bound to, let's say files or so) and stdout and possibly stderr, OutputStreams, bound to the TextArea's text.
It's not an easy task, and I don't know of any library to do it.
A: I remember seeing telnet client applet implementations around, years ago (back when people used telnet). Maybe you could dig them out and modify them.
A: System.out and System.in are statics and therefore evil. You'll need to go through your program replacing them with non-statics ("parameterise from above"). From an applet you can't use System.setOut/setErr/setIn.
Then you're pretty much sorted. An applet. Add a TextArea (or equivalent). Append output to the text area. Write key strokes to the input. Job done.
{
"language": "en",
"url": "https://stackoverflow.com/questions/138157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
Q: PHP alias @ function I'm new to PHP and I'm confused seeing some examples calling a function with a @ prefix like @mysql_ping().
What is it for? Googling / searching is not much help since @ gets discarded and 'alias' is not a good enough keyword.
A: It suppresses the output of error messages.
Contrary to another commentator here, I think that it is good programming practice to use it (especially if you are developing a web app, where the output would be mixed in the html of the output page).
Functions like mysql_connect return a resource identifier, or FALSE on errors. Use @mysql_connect(...) and check the return value.
A: Googling for "php at symbol" suggests that it asks PHP to not display any error messages that the call causes.
A: @ suppresses errors, warnings and notices.
You can use it for good purpose if you complement it with a custom error handler or with due check of $php_errormsg variable so you can handle errors properly.
In my experience, this proper usage is not seen very much and is instead used a lot in the bad way, just to hide errors without acting on them.
More info at http://www.php.net/manual/en/language.operators.errorcontrol.php
A: It suppresses any errors that might otherwise be output.
It is a recipe for pain and hardship, as it inevitably leads to difficulties when an error does occur; you are bound to spend hours tracking down the cause. If the @ operator hadn't been used, then the error would have been found in seconds.
There is no good reason to use it, use the display_errors and error_log ini settings to prevent errors from displaying on a live site, and let them be shown on your dev site.
If there is an error that you don't want to see, you're better off just fixing it than suppressing it!
If it's something in an external lib and outside your control, just write it to the logs, turn off display_errors on production, and live with it. Because there's no telling whether the error you're suppressing now and are happy to live with will ALWAYS be the error that is thrown from there.
@ === BAD
A: Suppress error messages:
http://bytes.com/forum/thread10951.html
A: Prefixing a function call with the @ symbol stops it from triggering the PHP error handler if an error occurs. Bear in mind that you must do all the error handling yourself if you decide to use it.
$test = @file_get_contents('nonexistant.file');
if(!$test)
{
die('Failed');
}
A better practice is to turn display_errors off and use custom error handlers (see Error Exception).
A: Sometimes it is useful - especially if the admin doesn't want you to play with the PHP environment, or the value isn't important and is mainly cosmetic. Do remember, though: it's a workaround, not a panacea.
[...]
$foutDate = @filemtime($keyring); /* Don't care, as we've already established the file exists */
$f["date"] = $foutDate;
$f["fullDate"] = date("r", $foutDate);
[...]
{
"language": "en",
"url": "https://stackoverflow.com/questions/138159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
Q: Wildcards in a Windows hosts file I want to setup my local development machine so that any requests for *.local are redirected to localhost. The idea is that as I develop multiple sites, I can just add vhosts to Apache called site1.local, site2.local etc, and have them all resolve to localhost, while Apache serves a different site accordingly.
I am on Windows XP.
I tried adding
127.0.0.1 *.local
to my c:\windows\system32\drivers\etc\hosts file, also tried:
127.0.0.1 .local
Neither of which seem to work.
I know I can set them up on different port numbers, but that is a pain since it is hard to remember which port is which.
I don't want to have to setup a local DNS server or anything hard, any suggestions?
A: To answer your question, you cannot use wildcards in the hosts file under Windows.
However, if you only want to change the hosts file to make new sites work, you can configure your Apache like this and you don't have to keep editing its config:
http://postpostmodern.com/instructional/a-smarter-mamp/
Basically a quick summary based on my setup, add the following to your apache.conf file:
LoadModule vhost_alias_module modules/mod_vhost_alias.so
NameVirtualHost *:80
<Directory "/xampp/sites">
Options Indexes FollowSymLinks Includes ExecCGI
AllowOverride All
Order allow,deny
Allow from all
</Directory>
<VirtualHost *:80>
VirtualDocumentRoot c:/xampp/sites/%-1/%-2+/
</VirtualHost>
This allows me to add an entry like:
127.0.0.1 test.dev
and then make the directory, c:\xampp\sites\dev\test and place the necessary files in there and it just works.
The other option is to use <Directory> tags in apache.conf and reference the pages from http://localhost/project/.
A: To add to the great suggestions already here, XIP.IO is a fantastic wildcard DNS server that's publicly available.
myproject.127.0.0.1.xip.io -- resolves to --> 127.0.0.1
other.project.127.0.0.1.xip.io -- resolves to --> 127.0.0.1
other.machine.10.0.0.1.xip.io -- resolves to --> 10.0.0.1
(The ability to specify non-loopback addresses is fantastic for testing sites on iOS devices where you cannot access a hosts file.)
If you combine this with some of the Apache configuration mentioned in other answers, you can potentially add VirtualHosts with zero setup.
A: Editing the hosts file is less of a pain when you run "ipconfig /flushdns" from the windows command prompt, instead of restarting your computer.
A: I found a posting about Using the Windows Hosts File that also says "No wildcards are allowed."
In the past, I have just added the additional entries to the hosts file, because (as previously said), it's not that much extra work when you already are editing the apache config file.
A: Acrylic DNS Proxy (free, open source) does the job. It creates a proxy DNS server (on your own computer) with its own hosts file. The hosts file accepts wildcards.
Download from the official website
http://mayakron.altervista.org/support/browse.php?path=Acrylic&name=Home
Configuring Acrylic DNS Proxy
To configure Acrylic DNS Proxy, install it from the above link then go to:
* Start
* Programs
* Acrylic DNS Proxy
* Config
* Edit Custom Hosts File (AcrylicHosts.txt)
Add the following lines at the end of the file:
127.0.0.1 *.localhost
127.0.0.1 *.local
127.0.0.1 *.lc
Restart the Acrylic DNS Proxy service:
* Start
* Programs
* Acrylic DNS Proxy
* Config
* Restart Acrylic Service
You will also need to adjust the DNS setting in your network interface settings:
* Start
* Control Panel
* Network and Internet
* Network Connections
* Local Area Connection Properties
* TCP/IPv4
Set "Use the following DNS server address":
Preferred DNS Server: 127.0.0.1
If you then combine this answer with jeremyasnyder's answer (using VirtualDocumentRoot) you can automatically set up domains/virtual hosts by simply creating a directory.
A: You could talk your network administrator into setting up a domain for you (say 'evilpuppetmaster.hell') and having the wildcard there so that everything (*.evilpuppetmaster.hell) resolves to your IP
A: We have this working using wildcard DNS in our local DNS server: add an A record something like *.local -> 127.0.0.1
I think that your network settings will need to have the chosen domain suffix in the domain suffix search list for machines on the network, so you might want to replace .local with your company's internal domain (e.g. .int) and then add a subdomain like .localhost.int to make it clear what it's for.
So *.localhost.int would resolve to 127.0.0.1 for everybody on the network, and config file settings for all developers would "just work" if endpoints hang off that subdomain e.g. site1.localhost.int, site2.localhost.int This is pretty much the scheme we have introduced.
dnsmasq also looks nice, but I have not tried it yet:
http://ihaveabackup.net/2012/06/28/using-wildcards-in-the-hosts-file/
A: I don't think that it is possible.
You have to modify the Apache virtual host entries every time you add a new site and location anyway, so it's not much extra work to synchronise the new name into the Windows hosts file.
Update: please check the next answer and the comments on this answer. This answer is 6 years old and not correct anymore.
A: I have written a simple dns proxy in Python. It will read wildcard entries in /etc/hosts. See here: http://code.google.com/p/marlon-tools/source/browse/tools/dnsproxy/dnsproxy.py
I have tested in Linux & Mac OS X, but not yet in Windows.
A: You may try AngryHosts, which provides a way to support wildcards and regular expressions. It's actually a hosts file enhancement and management tool.
More features can be seen @ http://angryhosts.com/features/
A: I'm using DNSChef to do that.
https://thesprawl.org/projects/dnschef/
You have to download the app; on Linux or Mac you need Python to run it, while Windows has its own exe.
You must create an ini file with your DNS entries, for example:
[A]
*.google.com=192.0.2.1
*.local=127.0.0.1
*.devServer1.com=192.0.2.3
Then you must launch the dns application with admin privileges
sudo python dnschef.py --file myfile.ini -q
or in windows
runas dnschef.exe --file myfile.ini -q
Finally you need to set your localhost as the only DNS server in your environment (in your network interface's DNS settings, or /etc/resolv.conf on Linux).
That's it
A: I made this simple tool to take the place of hosts. Regular expressions are supported.
https://github.com/stackia/DNSAgent
A sample configuration:
[
{
"Pattern": "^.*$",
"NameServer": "8.8.8.8"
},
{
"Pattern": "^(.*\\.googlevideo\\.com)|((.*\\.)?(youtube|ytimg)\\.com)$",
"Address": "203.66.168.119"
},
{
"Pattern": "^.*\\.cn$",
"NameServer": "114.114.114.114"
},
{
"Pattern": "baidu.com$",
"Address": "127.0.0.1"
}
]
A: @petah's answer with Acrylic DNS Proxy is the best one, and at the end it references the ability to do multi-site using Apache, which @jeremyasnyder describes a little further down...
... however, in our case we're testing a multi-tenant hosting system and so most domains we want to test go to the same virtualhost, while a couple others are directed elsewhere.
So in our case, you simply use regex wildcards in the ServerAlias directive, like so...
ServerAlias *.foo.local
A: Here is the total configuration for those trying to accomplish the goal (wildcards in a dev environment, i.e. XAMPP; this example assumes all sites point to the same codebase)
hosts file (add an entry)
file: %SystemRoot%\system32\drivers\etc\hosts
127.0.0.1 example.local
httpd.conf configuration (enable vhosts)
file: \XAMPP\etc\httpd.conf
# Virtual hosts
Include etc\extra\httpd-vhosts.conf
httpd-vhosts.conf configuration
file: XAMPP\etc\extra\httpd-vhosts.conf
<VirtualHost *:80>
ServerAdmin admin@example.local
DocumentRoot "\path_to_XAMPP\htdocs"
ServerName example.local
ServerAlias *.example.local
# SetEnv APP_ENVIRONMENT development
# ErrorLog "logs\example.local-error_log"
# CustomLog "logs\example.local-access_log" common
</VirtualHost>
restart apache
create pac file:
save as whatever.pac wherever you want to and then load the file in the browser's network>proxy>auto_configuration settings (reload if you alter this)
function FindProxyForURL(url, host) {
if (shExpMatch(host, "*example.local")) {
return "PROXY example.local";
}
return "DIRECT";
}
A: You can use echoipdns for this (https://github.com/zapty/echoipdns).
By running echoipdns local, all requests for .local subdomains are redirected to 127.0.0.1, so any domain such as xyz.local will resolve to 127.0.0.1. You can use any other suffix as well; just replace local with the name you want.
Echoipdns is even more powerful when you want to use your URL from other machines on the network: you can still use it with zero configuration.
For example, if your machine's IP address is 192.168.1.100, you can use a domain name like xyz.192-168-1-100.local, which will always resolve to 192.168.1.100. Echoipdns does this by looking at the IP address embedded in the second part of the domain name and returning that same IP address for the DNS query. You have to run echoipdns on the machine from which you want to access the remote system.
echoipdns can also be set up as a standalone DNS proxy, so by just pointing at this DNS you get all the above benefits without running a special command every time, and you can even use it from mobile devices.
So essentially this simplifies wildcard-domain DNS development for local as well as team environments.
echoipdns works on Mac, Linux and Windows.
NOTE: I am author for echoipdns.
A: I could not find a prohibition in writing, but by convention, the Windows hosts file closely follows the UNIX hosts file, and you cannot put wildcard hostname references into that file.
If you read the man page, it says:
DESCRIPTION
The hosts file contains information regarding the known hosts on the
network. For each host a single line should be present with the following
information:
Internet address
Official host name
Aliases
Although it does say,
Host names may contain any printable character other than a field
delimiter, newline, or comment character.
that is not true from a practical level.
Basically, the code that looks at the /etc/hosts file does not support a wildcard entry.
The workaround is to create all the entries in advance, maybe use a script to put a couple hundred entries at once.
A: While you can't add a wildcard like that, you can add the full list of sites that you need. At least for testing, that works well enough for me; in your hosts file, you just add:
127.0.0.1 site1.local
127.0.0.1 site2.local
127.0.0.1 site3.local
...
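If the list gets long, a small PowerShell sketch can append the block for you (run from an elevated prompt; the site names are examples):
$hosts = "$env:SystemRoot\System32\drivers\etc\hosts"
'site1','site2','site3' | ForEach-Object {
    Add-Content -Path $hosts -Value "127.0.0.1`t$_.local"
}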
A: nginx configuration for automatic subdomains with Acrylic DNS Proxy
auto.conf file for your nginx sites folder
server {
listen 80;
server_name ~^(?<branch>.*)\.example\.com;
root /var/www/html/$branch/public;
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_log /var/log/nginx/$branch.error.log error;
sendfile off;
client_max_body_size 100m;
location ~ \.php$ {
try_files $uri /index.php =404;
fastcgi_pass php-fpm:9000;
fastcgi_index index.php;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
}
Add to Acrylic hosts file 127.0.0.1 example.com *.example.com and restart Acrylic service.
$branch - your subdomain name.
In place of root /var/www/html/$branch/public; set your own project path
A: This can be done using Pi-hole: just edit /etc/hosts and restart the DNS service.
nano /etc/hosts
pihole restartdns
Example:
127.0.1.1 raspberrypi
192.168.1.1 w1.dev.net
192.168.1.2 w2.dev.net
192.168.1.3 w3.dev.net
A: You can use a dynamic DNS client such as http://www.no-ip.com. Then, with an external DNS server CNAME *.mydomain.com to mydomain.no-ip.com.
{
"language": "en",
"url": "https://stackoverflow.com/questions/138162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "321"
}
Q: How would you go about auto-detecting Textile versus Markdown? I'm considering supporting both Textile and Markdown on a current project. I would prefer not forcing users to choose one or the other. Is there a way to auto-detect which the user is using? How would you go about this? I'd like to find / develop both a JavaScript and a PHP solution so I can provide live previews as well as process the user input on the server-side.
A: Consider that users might only use one specific syntax element in a posting, so you'd have to check for everything. Looking for "h1." obviously only works if the user uses exactly that element.
It's pretty easy with things like headers, but consider that markdown formats *this* as <em>this</em> and Textile will convert that to <strong>this</strong> instead. So you'd have ambiguous syntax constructs that would yield different results in each language.
I'd suggest going with a user choice. Try to find out what syntax is generally preferred by your users (or you), offer an "use x instead of y" checkbox for those who want the other choice.
A: This really shouldn't be that hard. Markdown does not support the following syntax:
h1. Header
p. Paragraph
... so you simply scan for that to check whether it is Textile. A very simple regular expression to get you started (it scans for lines beginning with hX. or p.) in PHP code:
if (preg_match('/^(p|h[1-6])\. /m', $subject))
{
// Successful match
} else
{
// Match attempt failed
}
You will probably be able to write your own regex to scan for Markdown.
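For example, a hedged sketch of such a Markdown check in the same style (the patterns cover ATX headers and inline links only; they are illustrative, not exhaustive):
// Sketch: look for Markdown-specific constructs: "# Header" lines and [text](url) links.
if (preg_match('/^#{1,6} .+$|\[[^\]]+\]\([^)]+\)/m', $subject))
{
    // Probably Markdown
} else
{
    // No Markdown markers found
}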
A: Auto-detection? I don't know; both are based on "natural" typing.
Perhaps you can ask the user to choose a format, with a pair of radio-buttons or something.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: DotNetNuke vulnerabilities Anyone familiar with specific security issues in the current version of DotNetNuke?
(I've already checked out their site, securityfocus, etc...)
I've reopened the question, since my client developed their system using DotNetNuke - hence it is a programming question. I just need to know some issues regarding this platform.
A: DNN Vulnerability information will be at:
http://www.dotnetnuke.com/News/SecurityPolicy/tabid/940/Default.aspx
A: I'm not aware of any security issues that have been announced with the current version of DotNetNuke (4.9.0). The security policy of DotNetNuke is to address any known security issues as soon as they are discovered. They won't release a version with a known security issue.
A: I just want to add that the DotNetNuke Corporation, right or wrong, asks that people not publicly discuss exploit details if known, as doing so exposes the wider community to greater risk.
Typically the rule of thumb with DNN is to upgrade to the most current version and keep an eye on the security items posted on the site. Also, keeping an eye on Cathal's blog is a good idea, as he is the head security person.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Moving from ints to GUIDs as primary keys I use several referenced tables with integer primary keys. Now I want to change ints to GUIDs leaving all references intact. What is the easiest way to do it?
Thank you!
Addition
I do understand the process in general, so I need more detailed advice; for example, how to fill the new GUID column. Using the default value newid() is correct, but what about already existing rows?
A: Firstly: Dear God why?!?!?
Secondly, you're going to have to add the GUID column to all your tables first, then populate them based on the int value. Once done you can set the GUIDs to primary/foreign keys then drop the int columns.
To update the value you'd do something like
*
*Set the new GUIDs in the primary key table
*Run this:
UPDATE f
SET f.guidCol = p.guidCol
FROM foreignTable f
JOIN primaryTable p ON p.intCol = f.intCol
A: This is relevant in a system that implements the distributed computing model. If the system is required to know the primary key at the time when you persist information, the use of an auto-incrementing primary key maintained by ONE handler will slow down the system. Instead, you need a mechanism like a GUID generator to create primary keys (keep in mind that the true feature of a primary key is its uniqueness). This way, I can scale up with multiple services, each creating its primary key independently of the others.
I had the dubious privilege of doing this before, and basically what I had to do was export the whole damned database into XML. Next, I had a Java application that used java.util.Random's nextLong() function to replace the primary keys with their new guid keys. After that I imported the whole thing back into the database.
Of course, the first time I tried to import the XML files back, I forgot to turn off the auto-number feature of the primary key field, so do learn from my mistakes. I'm sure there are better ways of doing it, but this was a fast and dirty way that worked. In case you're wondering, the project was to make the application scale.
A: Yeah, I'm with Glenn... I was actually hesitating on posting the same thing before he posted it....
Why would you not want an auto-increment int primary key separate from your GUID? It's a lot more flexible, and you can just have the GUID column indexed so you have good performance on your queries...
As for the flexibility, I like to keep my id's as autoincrement ints because then the other seemingly unique and primary-key worthy item can change.
A great case of the flexibility is if you use usernames as a primary key. Even if they are unique, it is nice to be able to change them. What if users use an email address as their username? Being able to change the username and have it not affect all your queries is a big plus, and I suspect the same could be true with your GUIDs....
A: *
*Create a new column for the guid
value in the master table. Use the
uniqueidentifier data type, make it
not null with a newid() default so
all existing rows will be populated.
*Create new uniqueidentifier columns
in the child tables.
*Run update statements to build the guid relationships, using the existing int relationships to reference the entities.
*Drop the original int columns.
In addition, leave some space in your data/index pages (specify fillfactor < 100) as guids are not sequential like int identity columns are. This means inserts can be anywhere in the data range and will cause page splits if your pages are 100% full.
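A minimal T-SQL sketch of steps 1 to 3 (the table and column names here are examples, not from the question):
-- Add the guid key to the master table; the NOT NULL + default populates existing rows.
ALTER TABLE Customer
    ADD CustomerGuid uniqueidentifier NOT NULL
        CONSTRAINT DF_Customer_Guid DEFAULT NEWID();
-- Mirror the column on the child table, then fill it via the old int link.
ALTER TABLE CustomerOrder ADD CustomerGuid uniqueidentifier NULL;
UPDATE o
SET o.CustomerGuid = c.CustomerGuid
FROM CustomerOrder o
JOIN Customer c ON c.CustomerId = o.CustomerId;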
A: I think you must do it manually, or you can write a small utility for it. The scenario would be:
*
*Duplicate the "int" PK/FK columns with new "guid" columns.
*Generate new values for the "guid" PK columns.
*Update values in the "guid" FK columns with the corresponding values (you find the records via the "int" PK).
*Remove references (relations) with the "int" PK/FK columns.
*Create similar references (relations) with the "guid" PK/FK columns.
*Remove the "int" PK/FK columns.
A: It's a very good choice. I switched from longs to UUIDs for one of my applications and I don't regret it. If you use MS SQL Server, a uuid type is included as standard (I use PostgreSQL, where it's only included as standard from 8.3 on).
As mentioned by Glenn Slaven, you can recreate UUIDs from the keys you have in your current records. Be aware that they will not be unique that way, though, but it makes it easy to keep the relationships intact. New records you create after the move will be unique.
A: DON'T DO IT! We started out using GUIDs, and now we've almost finished moving to INTs as PKs; we're retaining the GUID for logging purposes (and for some tables of, er, "negotiable relational integrity" ;) ), but the speed increase of using ints has been phenomenal.
This only really became apparent when the table rowcounts crossed into millions, mind you.
Our biggest folly by far was using a NEWID() as the PK of our (sequential) log table - there was much head-smacking when we realised our error.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: In Visual Studio, when would I want to use the Test View? For managing unit tests in Visual Studio, I use the Test List Editor. There's also a Test View which looks similar but more limited. When would I want to use the Test View as opposed to the Test List Editor or any of the other test windows?
A: For me, the Test View is a nice compact list that fits better on screen (if I need it on screen at all), as I use a separate portrait monitor for most of the VS tool windows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Patch flex framework to show preloader instantly? In the Flex framework a custom preloader can be used while the site is loading.
In the Adobe docs it specifies that 'the progress bar [preloader] is displayed if less than half of the application is downloaded after 700 milliseconds of downloading.'
However I ALWAYS want the preloader to appear instantly, since I know that 95% of our users are first-time visitors and the site is over 500kb. I don't want people to have to wait 0.7 seconds for the preloader animation to appear.
I would think in theory that it is possible to 'monkey patch' the framework to remove this 0.7-second limitation. I don't have time to figure out how, and I've never done it before.
Anybody help?
A: You should just extend DownloadProgressBar; try the following code. I've used this before, and I found Jesse Warden's site useful for info on it (that's where I found out about the technique; this is a cut-down version of his code).
package{
import flash.display.MovieClip;
import flash.display.Sprite;
import flash.events.Event;
import flash.events.ProgressEvent;
import mx.events.FlexEvent;
import mx.preloaders.DownloadProgressBar;
public class Preloader extends DownloadProgressBar
{
/**
* The Flash 8 MovieClip embedded as a Class.
*/
[Embed(source="yourPreloaderFile.swf")]
private var FlashPreloaderSymbol:Class;
private var clip:MovieClip;
public function Preloader()
{
super();
clip = new FlashPreloaderSymbol();
addChild(clip);
}
public override function set preloader(preloader:Sprite):void
{
preloader.addEventListener( FlexEvent.INIT_COMPLETE , onFlexInitComplete );
centerPreloader();
}
private function centerPreloader():void
{
x = (stageWidth / 2) - (clip.width / 2);
y = (stageHeight / 2) - (clip.height / 2);
}
private function onFlexInitComplete( event:FlexEvent ):void
{
dispatchEvent( new Event( Event.COMPLETE ) );
}
protected override function showDisplayForDownloading(time : int, event : ProgressEvent) : Boolean {
return true;
}
}
}
After that, just change the preloader property in the main application tag to the Preloader class.
A: This is in mx.preloaders::DownloadProgressBar.as, line 1205 in the showDisplayForDownloading function.
Old school monkey-patching is out with AS3, but you can either edit the Flex source and compile yourself a new framework.swc (apparently a pain), or just include it in your source path (source paths override .swcs); or derive your own preloader class from DownloadProgressBar that just overrides showDisplayForDownloading and returns true.
You can find the framework source in '%PROGRAMFILES%\Adobe\Flex Builder 3[ Plug-in]\sdks\3.0.0\frameworks\projects\framework\src', then the package path. Change the sdk version if you are using 3.1, or whatever.
A: I'd guess that delay is there for two reasons:
*
*You don't want the preloader to "blink" in once the page is already cached
*The preloader itself has to load
When I need to make absolutely sure a preloader is shown instantly I make a small wrapper swf that has just the preloader and load the main swf from there.
A: It's not possible to make the preloader show instantly, since some classes need to be downloaded before any progress can be displayed. An alternative is to display a progress indicator in HTML; when the Flash movie is loaded, it shows up.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to generate a PDF from an embedded report definition without server or UI? Is it possible for a stand alone executable to generate a report and output it as PDF (or one of the other export options available from the report viewer) without displaying the ReportViewer control?
The report definition should be embedded in the executable and should not use the Reporting Services web service.
A: Actually, you don't need a ReportViewer at all; you can directly instantiate and use a LocalReport:
LocalReport report = new LocalReport();
report.ReportPath = "templatepath";
// or use file from resource with report.ReportEmbeddedResource
// add parameters, datasource, etc.
Warning[] warnings;
string[] streamids;
string mimeType;
string encoding;
string filenameExtension;
byte[] bytes;
bytes = report.Render("PDF", null, out mimeType, out encoding, out filenameExtension, out streamids, out warnings);
// save byte[] to file with FileStream or something else
A: You don't have to show the control itself.
ReportViewer rv = new ReportViewer();
rv.LocalReport.ReportPath = "templatepath";
// or use file from resource with rv.LocalReport.ReportEmbeddedResource
// add parameters, datasource, etc.
Warning[] warnings;
string[] streamids;
string mimeType;
string encoding;
string filenameExtension;
byte[] bytes;
bytes = rv.LocalReport.Render("PDF", null, out mimeType, out encoding, out filenameExtension, out streamids, out warnings);
// save byte[] to file with FileStream or something else
However, it can render only PDF and XLS (the ReportViewer control cannot export to Word and other formats the way Reporting Services can).
I forgot to mention that the above code is C#, using .NET framework and ReportViewer control. Check out GotReportViewer for a quickstart.
A: Can you pass an .rdlc report directly to PDF with parameters? I have two dropdownlists that I pull my report with. I can't get the parameters to work when automatically exporting to PDF. Here is the error I get: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: One or more parameters required to run the report have not been specified.
The dropdownlists work when I use the ReportViewer, but I want to skip this step. I can also get my data to go directly to a PDF if it doesn't have any parameters. My dropdownlists are called ddlyear and ddlmonth.
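That error just means the report never received its parameter values; you have to pass them to the LocalReport before rendering. A minimal sketch, continuing the code above (the parameter names "Year" and "Month" are assumptions; use whatever your .rdlc defines):
// Sketch: supply parameter values before calling Render.
report.SetParameters(new ReportParameter[] {
    new ReportParameter("Year", ddlyear.SelectedValue),   // assumed parameter name
    new ReportParameter("Month", ddlmonth.SelectedValue)  // assumed parameter name
});
bytes = report.Render("PDF", null, out mimeType, out encoding, out filenameExtension, out streamids, out warnings);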
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Squid Programming Does anybody know a good tutorial about Squid plug-in development?
A: There is one in the squid documentation. IIRC it's fairly straightforward - squid forks a process and passes data down a pipe to the process. A somewhat out-of-date but still relevant programmer's guide can be found Here.
A: Squid is nowadays a pretty modular and extensible platform, so it really depends on what you need to do. Most information can be found in the squid wiki (http://wiki.squid-cache.org)
There is the "helper" family of coprocesses-based extensions (http://wiki.squid-cache.org/Features/AddonHelpers)
*
*redirector helpers (http://wiki.squid-cache.org/Features/Redirectors)
*authentication helpers for Basic/NTLM/Digest/Negotiate/Kerberos etc
*authorization helpers for LDAP/Active Directory/Session/...
In case you are interested in content modification there is ICAP for out-of-process adaptation and eCAP for in-process adaptation.
I'm sorry I can't post direct links; you'll find the data you need by looking for the relevant keywords in the squid wiki.
A: You might also find the Add-On Helpers for Request Manipulation topic useful.
A: You can visit http://www.squid-cache.org/Versions/v3/3.4/cfgman/ to understand Squid's configuration completely.
I think the latest version, 3.5, has arrived, but this page may still be a solution to your current problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: difference in speed between char and integer arrays? Currently I'm dealing with video processing software in which the picture data (8-bit signed and unsigned) is stored in arrays of 16-byte-aligned integers, allocated as
__declspec(align(16)) int *pData = (__declspec(align(16)) int *)_mm_malloc(width*height*sizeof(int),16);
Generally, wouldn't it enable faster reading and writing if one used signed/unsigned char arrays like this?:
__declspec(align(16)) unsigned char *pData = (unsigned char *)_mm_malloc(width*height*sizeof(unsigned char),16);
I know little about cache line size and data transfer optimization, but at least I know that it is an issue. Beyond that, SSE will be used in future, and in that case char-arrays - unlike int arrays - are already in a packed format. So which version would be faster?
A: If you're planning to use SSE, storing the data in its native size (8-bit) is almost certainly a better choice, since loads of operations can be done without unpacking, and even if you need to unpack for pmaddwd or other similar instructions, its still faster because you have to load less data.
Even in scalar code, loading 8-bit or 16-bit values is no slower than loading 32-bit, since movzx/movsx is no different in speed from mov. So you just save memory, which surely can't hurt.
A: It really depends on your target CPU -- you should read up on its specs and run some benchmarks as everyone has already suggested. Many factors could influence performance. The first obvious one that comes to my mind is that your array of ints is 2 to 4 times larger than an array of chars and, hence, if the array is big enough, you'll get fewer data cache hits, which will definitely slow down the performance.
A: On the contrary, packing and unpacking is expensive in CPU instructions.
If you want to do a lot of random pixel operations, it is faster to use an array of int, so that each pixel has its own address.
But if you iterate through your image sequentially, you want a char array, so that it is smaller and reduces the chance of a page fault (especially for large images).
A: Char arrays can be slower in some cases. As a very general rule of thumb, the native word size is the best to go for, which will more than likely be 4 bytes (32-bit) or 8 bytes (64-bit). Even better is to have everything aligned to 16 bytes as you have already done... this will enable faster copies if you use SSE instructions (non-temporal stores such as MOVNTDQ). If you are only concerned with moving items around, this will have a much greater impact than the type used by the array...
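To illustrate that last point, a small sketch of a streaming (non-temporal) copy over 16-byte-aligned pixel data, assuming SSE2 and a length that is a multiple of 16:
#include <emmintrin.h>
#include <stddef.h>
/* Sketch: copy aligned pixel data with non-temporal stores (MOVNTDQ).
   Assumes dst/src are 16-byte aligned and len is a multiple of 16. */
void stream_copy(unsigned char *dst, const unsigned char *src, size_t len)
{
    size_t i;
    for (i = 0; i < len; i += 16) {
        __m128i v = _mm_load_si128((const __m128i *)(src + i));
        _mm_stream_si128((__m128i *)(dst + i), v);
    }
    _mm_sfence(); /* make the streaming stores globally visible */
}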
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to read the RGB value of a given pixel in Python? If I open an image with open("image.jpg"), how can I get the RGB values of a pixel assuming I have the coordinates of the pixel?
Then, how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value?
I would prefer if I didn't have to download any additional libraries.
A: from PIL import Image

photo = Image.open('IN.jpg') # your image
photo = photo.convert('RGB')
width = photo.size[0] # define W and H
height = photo.size[1]
for y in range(0, height): # each pixel has coordinates
    row = ""
    for x in range(0, width):
        RGB = photo.getpixel((x, y))
        R, G, B = RGB # now you can use the RGB value
A: Using a library called Pillow, you can make this into a function, for ease of use later in your program, and if you have to use it multiple times.
The function simply takes in the path of an image and the coordinates of the pixel you want to "grab." It opens the image, converts it to an RGB color space, and returns the R, G, and B of the requested pixel.
from PIL import Image

def rgb_of_pixel(img_path, x, y):
    im = Image.open(img_path).convert('RGB')
    r, g, b = im.getpixel((x, y))
    a = (r, g, b)
    return a
*Note: I was not the original author of this code; it was left without an explanation. As it is fairly easy to explain, I am simply providing said explanation, just in case someone down the line does not understand it.
A: Using Pillow (which works with Python 3.X as well as Python 2.7+), you can do the following:
from PIL import Image
im = Image.open('image.jpg', 'r')
width, height = im.size
pixel_values = list(im.getdata())
Now you have all pixel values. If it is RGB or another mode can be read by im.mode. Then you can get pixel (x, y) by:
pixel_values[width*y+x]
Alternatively, you can use Numpy and reshape the array:
>>> pixel_values = numpy.array(pixel_values).reshape((width, height, 3))
>>> x, y = 0, 1
>>> pixel_values[x][y]
[ 18 18 12]
A complete, simple to use solution is
# Third party modules
import numpy
from PIL import Image
def get_image(image_path):
    """Get a numpy array of an image so that one can access values[x][y]."""
    image = Image.open(image_path, "r")
    width, height = image.size
    pixel_values = list(image.getdata())
    if image.mode == "RGB":
        channels = 3
    elif image.mode == "L":
        channels = 1
    else:
        print("Unknown mode: %s" % image.mode)
        return None
    pixel_values = numpy.array(pixel_values).reshape((width, height, channels))
    return pixel_values

image = get_image("gradient.png")
print(image[0])
print(image.shape)
Smoke testing the code
You might be uncertain about the order of width / height / channel. For this reason I've created this gradient:
The image has a width of 100px and a height of 26px. It has a color gradient going from #ffaa00 (yellow) to #ffffff (white). The output is:
[[255 172 5]
[255 172 5]
[255 172 5]
[255 171 5]
[255 172 5]
[255 172 5]
[255 171 5]
[255 171 5]
[255 171 5]
[255 172 5]
[255 172 5]
[255 171 5]
[255 171 5]
[255 172 5]
[255 172 5]
[255 172 5]
[255 171 5]
[255 172 5]
[255 172 5]
[255 171 5]
[255 171 5]
[255 172 4]
[255 172 5]
[255 171 5]
[255 171 5]
[255 172 5]]
(100, 26, 3)
Things to note:
*
*The shape is (width, height, channels)
*The image[0], hence the first row, has 26 triples of the same color
A: Image manipulation is a complex topic, and it's best if you do use a library. I can recommend gdmodule which provides easy access to many different image formats from within Python.
A: There's a really good article on wiki.wxpython.org entitled Working With Images. The article mentions the possibility of using wxWidgets (wxImage), PIL or PythonMagick. Personally, I've used PIL and wxWidgets, and both make image manipulation fairly easy.
A: You can use pygame's surfarray module. This module has a 3d pixel array returning method called pixels3d(surface). I've shown usage below:
from pygame import surfarray, image, display
import pygame
import numpy # important to import

pygame.init()
image = image.load("myimagefile.jpg") # surface to render
resolution = (image.get_width(), image.get_height())
screen = display.set_mode(resolution) # create space for display
screen.blit(image, (0, 0)) # superpose image on screen
display.flip()
surfarray.use_arraytype("numpy") # important!
screenpix = surfarray.pixels3d(image) # pixels in 3d array: [x][y][rgb]
for y in range(resolution[1]):
    for x in range(resolution[0]):
        for color in range(3):
            screenpix[x][y][color] += 128 # reverting colors
screen.blit(surfarray.make_surface(screenpix), (0, 0)) # superpose on screen
display.flip() # update display
while 1:
    print "finished"
I hope this has been helpful. Last word: the screen is locked for the lifetime of screenpix.
A: You could use the Tkinter module, which is the standard Python interface to the Tk GUI toolkit, and you don't need an extra download. See https://docs.python.org/2/library/tkinter.html.
(For Python 3, Tkinter is renamed to tkinter)
Here is how to set RGB values:
# from http://tkinter.unpythonic.net/wiki/PhotoImage
from Tkinter import *

root = Tk()

def pixel(image, pos, color):
    """Place pixel at pos=(x,y) on image, with color=(r,g,b)."""
    r, g, b = color
    x, y = pos
    image.put("#%02x%02x%02x" % (r, g, b), (y, x))

photo = PhotoImage(width=32, height=32)
pixel(photo, (16, 16), (255, 0, 0)) # One lone pixel in the middle...
label = Label(root, image=photo)
label.grid()
root.mainloop()
And get RGB:
# from http://www.kosbie.net/cmu/spring-14/15-112/handouts/steganographyEncoder.py
def getRGB(image, x, y):
    value = image.get(x, y)
    return tuple(map(int, value.split(" ")))
A: It's probably best to use the Python Imaging Library (PIL) to do this, which, I'm afraid, is a separate download.
The easiest way to do what you want is via the load() method on the Image object which returns a pixel access object which you can manipulate like an array:
from PIL import Image
im = Image.open('dead_parrot.jpg') # Can be many different formats.
pix = im.load()
print im.size # Get the width and height of the image for iterating over
print pix[x,y] # Get the RGBA value of a pixel of an image
pix[x,y] = value # Set the RGBA value of the image (tuple)
im.save('alive_parrot.png') # Save the modified pixels as .png
Alternatively, look at ImageDraw which gives a much richer API for creating images.
A: PyPNG - lightweight PNG decoder/encoder
Although the question hints at JPG, I hope my answer will be useful to some people.
Here's how to read and write PNG pixels using PyPNG module:
import png, array
point = (2, 10) # coordinates of pixel to be painted red
reader = png.Reader(filename='image.png')
w, h, pixels, metadata = reader.read_flat()
pixel_byte_width = 4 if metadata['alpha'] else 3
pixel_position = point[0] + point[1] * w
new_pixel_value = (255, 0, 0, 0) if metadata['alpha'] else (255, 0, 0)
pixels[
pixel_position * pixel_byte_width :
(pixel_position + 1) * pixel_byte_width] = array.array('B', new_pixel_value)
output = open('image-with-red-dot.png', 'wb')
writer = png.Writer(w, h, **metadata)
writer.write_array(output, pixels)
output.close()
PyPNG is a single pure Python module less than 4000 lines long, including tests and comments.
PIL is a more comprehensive imaging library, but it's also significantly heavier.
A: Install PIL using the command "sudo apt-get install python-imaging" and run the following program. It will print the RGB values of the image. If the image is large, redirect the output to a file using '>'; later, open the file to see the RGB values.
import PIL
import Image

FILENAME = 'fn.gif' # image can be in gif, jpeg or png format
im = Image.open(FILENAME).convert('RGB')
pix = im.load()
w = im.size[0]
h = im.size[1]
for i in range(w):
    for j in range(h):
        print pix[i, j]
A: As Dave Webb said:
Here is my working code snippet printing the pixel colours from an
image:
import os, sys
import Image
im = Image.open("image.jpg")
x = 3
y = 4
pix = im.load()
print pix[x,y]
A: import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread('Cricket_ACT_official_logo.png')
imgplot = plt.imshow(img)
A: If you are looking to have three digits in the form of an RGB colour code, the following code should do just that.
i = Image.open(path)
pixels = i.load() # this is not a list, nor is it list()'able
width, height = i.size

all_pixels = []
for x in range(width):
    for y in range(height):
        cpixel = pixels[x, y]
        all_pixels.append(cpixel)
This may work for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184"
}
|
Q: Calling WMPLib.mediaCollection methods from ASP.NET on IIS return empty lists I am trying to access a Windows Media Player library from ASP.NET.
The following code:
WMPLib.WindowsMediaPlayer mplayer = new WMPLib.WindowsMediaPlayer();
WMPLib.IWMPStringCollection list = mplayer.mediaCollection.getAttributeStringCollection("artist", "audio");
Returns a non-empty list when run using the VS2005 development web server, but an empty list when using IIS.
Setting impersonation with:
System.Security.Principal.WindowsImpersonationContext impersonationContext = ((System.Security.Principal.WindowsIdentity)User.Identity).Impersonate();
Doesn't help. It seems that WMPLib still doesn't think it's running as a user who has a library.
Is there a way to get around this?
A: Have you tried configuration via web.config in ASP.NET? When you're running in the VS2005 debugger, you're (probably) running code as yourself, but when under IIS you'll be running it as IUSR_machinename or another low-permission system account.
Try adding something like this to your web.config file:
<system.web>
<identity impersonate="true" userName="MYDOMAIN\myuser" password="p@ssw0rd" />
</system.web>
No idea whether this works with Media Player specifically, but it works for other identity/security related problems like this.
A: I've run into a similar problem: the code works fine on my local machine, but once deployed on my home server, it can not pull anything out of the media library (I can open media player to verify there are songs in the library)
At first I thought it was a process issue as well, so I tried both setting the application pool to run under my own account, and to set it via the identity impersonate tags; neither resolved the issue.
I'm not sure what other differences would cause the issue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How would I go about taking a snapshot of a process to preserve its state for future investigation? Is this possible? Whether this is possible I don't know, but it would mighty useful!
I have a process that fails periodically (running in Windows 2000). I then have just one chance to react to it before having to restart it and painfully wait for it to fail again. I didn't write the process so don't have the source to debug. The failure is seemingly random.
With a snapshot of the process I could repeatedly and quickly test reactions to the failure.
I had thought of running inside a VM but this isn't possible in this instance.
EDIT:
@Jon Cage asked:
When you say a snapshot, you mean capturing a process when it's about to fail (including memory, program state etc. etc.) ...and then replaying it's final few seconds repeatedly to see what effect it has on some other component?
This is exactly what I mean!
A: I think minidump is what you are looking for.
You can also used Userdump:
The User Mode Process Dumper
(userdump) dumps any running Win32
processes memory image (including
system processes such as csrss.exe,
winlogon.exe, services.exe, etc) on
the fly, without attaching a debugger,
or terminating target processes.
Generated dump file can be analyzed or
debugged by using the standard
debugging tools.
This article shows you how to use it.
A: My best bet is to start the process in a debugger (OllyDbg being my preferred tool).
The process will pause on an exception, and you can try to figure out what happened shortly before that.
This needs some understanding of assembler and does not allow to create a snapshot of the process for later analysis. You would need to write your own debugger for that - it should be theoretically possible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Select Query in SQL + All the values in columns I have a table Table1 with columns id1, id2, id3; all the columns are nullable.
I may enter null or a value in any column of a row.
My question is: I need to select the rows in which none of the column values is null.
Thanks
There are around 300 columns in total in the table. I can't write an IS NULL check for every column in the WHERE condition.
A: The answer to use a "function" to test the null-values is correct. The syntax depends on the database. If ISNULL() does not exist in your database then try:
SELECT * FROM Table1 WHERE id1 IS NOT NULL AND id2 IS NOT NULL AND id3 IS NOT NULL
And there is no way to shorten this, even if you have 300 fields in your table.
A: I don't understand why this question is getting negative reviews. This question applies to anyone who inherited a large table from a non-programmer (I know from previous experience), and likewise if the table is unknown. To downgrade this because it has '300' columns is pointless, IMO.
A: Your best bet is to rethink the design of your tables, splitting them if required.
Otherwise, do it programmatically: grab the table metadata, iterate through the columns and dynamically create the SQL from there. Most coding languages have access to the table's metadata; failing that, a second SQL query is required for it.
But the best bet is to think about how you can design the table better.
A: You need to do this:
SELECT *
FROM yourtable
WHERE
column1 IS NOT NULL
AND column2 IS NOT NULL
AND column3 IS NOT NULL
AND ....
A: Are you saying you want to select the rows where none of the columns are null?
SELECT id1, id2, id3
FROM Table1
WHERE id1 IS NOT NULL AND id2 IS NOT NULL AND id3 IS NOT NULL
A: Sorry - I might be being a bit thick here. You're trying to get back the rows that have got SOMETHING in one of the columns (other than the id column)?
Can't you do:
create view vw_View_Fields1to5 as
select id from employees
where name is not null or description is not null or field3 is not null
or field4 is not null or field5 is not null;

create view vw_View_Fields6to10 as
select id from employees
where field6 is not null or field7 is not null or field8 is not null
or field9 is not null or field10 is not null;
(etc)
select id from vw_View_Fields1to5
union
select id from vw_View_Fields6to10 .... (etc)
You'd have to take a DISTINCT or something to cut down the rows that fall into more than one view, of course.
If you want the rows back that have NOTHING in any column other than id, you'd switch 'or blah is not null' to be 'and blah is null' (etc).
Does that make sense... or am I missing something? :-)
EDIT: Actually, I believe the UNION process will only bring back distinct rows anyway (as opposed to UNION ALL), but I could be wrong - I haven't actually tried this.... (yet!)
A: You can try a CLR stored procedure (if you're using SQL Server) or move this logic to another layer of your application, using C# or whatever language you're using.
Another option is to create the query dynamically, concatenating your WHERE clause, and EXECute the dynamically generated query, as sketched below.
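A hedged sketch of that dynamic approach on SQL Server 2005 or later (Table1 is the table from the question; everything else is an example):
DECLARE @where nvarchar(max);
SET @where = '';
-- Build "col1 IS NOT NULL AND col2 IS NOT NULL AND ..." from the metadata.
SELECT @where = @where + QUOTENAME(COLUMN_NAME) + ' IS NOT NULL AND '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Table1';
-- Trim the trailing ' AND' (LEN ignores the trailing space) and run the query.
SET @where = LEFT(@where, LEN(@where) - 4);
EXEC('SELECT * FROM Table1 WHERE ' + @where);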
A: Are you just reading the data, or will you want to try and update the rows in question?
I'm just wondering if there's something you can do by making a half-dozen views, each one based on, say, 50 columns being NOT NULL, and then linking them with some kind of EXISTS or UNION statement?
Can you tell us a bit more about what you want to do with your result set?
A: For the first pass, the approach from Georgi, engram or robsoft is the way to go. For subsequent queries, however, you can (if possible) alter the table and add one more column, called CSELECTFLAG, initially set to 'Y' for all rows that have values and 'N' for the others. This flag needs to be updated on every insert. It would make your subsequent queries faster and easier.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What are hashtables and hashmaps and their typical use cases? I have recently run across these terms few times but I am quite confused how they work and when they are usualy implemented?
A: Well, think of it this way.
If you use an array, a simple index-based data structure, and fill it up with random stuff, finding a particular entry gets to be a more and more expensive operation as you fill it with data, since you basically have to start searching from one end toward the other, until you find the one you want.
If you want to get faster access to data, you typically resort to sorting the array and using a binary search. This, however, while increasing the speed of looking up an existing value, makes inserting new values slow, as you need to move existing elements around when you need to insert an element in the middle.
A hashtable, on the other hand, has an associated function that takes an entry, and reduces it to a number, a hash-key. This number is then used as an index into the array, and this is where you store the entry.
A hashtable revolves around an array, which initially starts out empty. Empty does not mean zero length; the array starts out with a size, but all the elements in the array contain nothing.
Each element has two properties, data, and a key that identifies the data. For instance, a list of zip-codes of the US would be a zip-code -> name type of association. The function reduces the key, but does not consider the data.
So when you insert something into the hashtable, the function reduces the key to a number, which is used as an index into this (empty) array, and this is where you store the data, both the key, and the associated data.
Then, later, you want to find a particular entry that you know the key for, so you run the key through the same function, get its hash-key, and goes to that particular place in the hashtable and retrieves the data there.
The theory goes that the function that reduces your key to a hash-key, that number, is computationally much cheaper than the linear search.
A typical hashtable does not have an infinite number of elements available for storage, so the number is typically reduced further down to an index which fits into the size of the array. One way to do this is to simply take the modulus of the index compared to the size of the array. For an array with a size of 10, index 0-9 will map directly to an index, and index 10-19 will map down to 0-9 again, and so on.
Some keys will be reduced to the same index as an existing entry in the hashtable. At this point the actual keys are compared directly, with all the rules associated with comparing the data types of the key (ie. normal string comparison for instance). If there is a complete match, you either disregard the new data (it already exists), or you overwrite (you replace the old data for that key), or you add it (multi-valued hashtable). If there is no match, which means that though the hash keys were identical, the actual keys were not, you typically find a new location to store that key+data in.
Collision resolution has many implementations, and the simplest one is to just go to the next empty element in the array. This simple solution has other problems though, so finding the right resolution algorithm is also a good exercise for hashtables.
Hashtables can also grow, if they fill up completely (or close to), and this is usually done by creating a new array of the new size, and calculating all the indexes once more, and placing the items into the new array in their new locations.
The function that reduces the key to a number does not produce a linear value, ie. "AAA" becomes 1, then "AAB" becomes 2, so the hashtable is not sorted by any typical value.
There is a good wikipedia article available on the subject as well, here.
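To tie the description above together, here is a minimal sketch in Java of an open-addressing hashtable with linear probing (resizing is omitted, and all names are illustrative):
public class TinyHashTable {
    private String[] keys = new String[16];
    private String[] values = new String[16];

    // Reduce the key to a number, then fold it into the array bounds (modulus).
    private int indexFor(String key) {
        return (key.hashCode() & 0x7fffffff) % keys.length;
    }

    public void put(String key, String value) {
        int i = indexFor(key);
        // Collision resolution: walk to the next free (or matching) slot.
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;
        }
        keys[i] = key;     // store the key...
        values[i] = value; // ...and the data it identifies
    }

    public String get(String key) {
        int i = indexFor(key);
        while (keys[i] != null) {
            if (keys[i].equals(key)) return values[i];
            i = (i + 1) % keys.length;
        }
        return null; // not present
    }

    public static void main(String[] args) {
        TinyHashTable t = new TinyHashTable();
        t.put("90210", "Beverly Hills");
        System.out.println(t.get("90210")); // prints: Beverly Hills
    }
}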
A: lassevk's answer is very good, but might contain a little too much detail. Here is the executive summary. I am intentionally omitting certain relevant information which you can safely ignore 99% of the time.
There is no important difference between hash tables and hash maps 99% of the time.
Hash tables are magic
Seriously. It's a magic data structure which all but guarantees three things. (There are exceptions. You can largely ignore them, although learning them someday might be useful for you.)
1) Everything in the hash table is part of a pair -- there is a key and a value. You put in and get out data by specifying the key you are operating on.
2) If you are doing anything by a single key on a hash table, it is blazingly fast. This implies that put(key,value), get(key), contains(key), and remove(key) are all really fast.
3) Generic hash tables fail at doing anything not listed in #2! (By "fail", we mean they are blazingly slow.)
When do we use hash tables?
We use hash tables when their magic fits our problem.
For example, caching frequently ends up using a hash table -- for example, let's say we have 45,000 students in a university and some process needs to hold on to records for all of them. If you routinely refer to student by ID number, then a ID => student cache makes excellent sense. The operation you are optimizing for this cache is fast lookup.
Hashes are also extraordinarily useful for storing relationships between data when you don't want to go whole hog and alter the objects themselves. For example, during course registration, it might be a good idea to be able to relate students to the classes they are taking. However, for whatever reason you might not want the Student object itself to know about that. Use a studentToClassRegistration hash and keep it around while you do whatever it is you need to do.
They also make a fairly good first choice for a data structure except when you need to do one of the following:
When Not To Use Hash Tables
Iterate over the elements. Hash tables typically do not do iteration very well. (Generic ones, that is. Particular implementations sometimes contain linked lists which are used to make iterating over them suck less. For example, in Java, LinkedHashMap lets you iterate over keys or values quickly.)
Sorting. If you can't iterate, sorting is a royal pain, too.
Going from value to key. Use two hash tables. Trust me, I just saved you a lot of pain.
A: If you are talking in terms of Java, both are collections which allow object addition, deletion and updating, and which use hashing algorithms internally.
The significant difference, however, if we talk in reference to Java, is that Hashtables are inherently synchronized and hence are thread-safe, while HashMaps are not thread-safe collections.
Apart from the synchronization, the internal mechanism to store and retrieve objects is hashing in both cases.
If you need to see how hashing works, I would recommend a bit of googling on data structures and hashing techniques.
A: Hashtables/hashmaps associate a value (called a 'key' for disambiguation purposes) with another value. You can think of them as a kind of dictionary (word: definition) or a database record (key: data).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Is the hash of a GUID unique? I create a GUID (as a string) and get the hash of it. Can I consider this hash to be unique?
A: In a word, no.
Let's assume that your hash has fewer bits than the GUID, by the pigeon hole principle, there must exist more than one mapping of some GUID -> hash simply because there are fewer hashes than GUIDS.
If we assume that the hash has a larger number of bits than the GUID, there is a very small--but finite--chance of a collision, assuming you're using a good hash function.
A: No hash function that reduces an arbitrary sized data block to a fixed size number of bits will produce a 1-to-1 mapping between the two. There will always exist a chance of having two different data blocks be reduced to the same sequence of bits in the hash.
Good hash algorithms minimizes the likelihood of this happening, and generally, the more bits in the hash, the less chance of a collision.
A: Not as reliably unique as the GUID itself, no.
Just to expand, you are reducing your uniqueness by a factor of 4, going from 16 bytes to 4 bytes of possible combinations.
As pointed out in the comments the hash size will make a difference. The 4 byte thing was an assumption, horrible at best I know, that it may be used in .NET, where the default hash size is 4 bytes (int). So you can replace what I said above with whatever byte size your hash may be.
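To make the size reduction concrete, a small C# sketch (assuming .NET, where Guid.GetHashCode() folds the 128-bit value into a 32-bit int):
using System;

class GuidHashDemo
{
    static void Main()
    {
        Guid g = Guid.NewGuid();
        Console.WriteLine(g);               // 128 bits of identity
        // By the pigeonhole principle, many distinct GUIDs must share
        // the same 32-bit hash code.
        Console.WriteLine(g.GetHashCode()); // only 2^32 possible values
    }
}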
A: It's not guaranteed to be, due to hash collisions. The GUID itself is almost guaranteed to be.
For practical reasons you probably can assume that a hash is unique, but why not use the GUID itself?
A: No, and I wouldn't assume uniqueness of any hash value. That shouldn't matter, because hash values don't need to be unique; they just need to be evenly distributed across their range. The more even the distribution, the fewer collisions occur (in the hashtable). Fewer collisions mean better hashtable performance.
fyi For a good description of how hash tables work, read the accepted answer to What are hashtables and hashmaps and their typical use cases?
A: Nope.
See here, if you want a mini GUID: https://devblogs.microsoft.com/oldnewthing/20080627-00/?p=21823
A: If you use a cryptographic hash (MD5, SHA1, RIPEMD160), the hash will be unique (modulo collisions, which are very improbable -- SHA1 is used e.g. for digital signatures, and MD5 is also collision-resistant on random inputs). Though, why do you want to hash a GUID?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: How can I improve CVS performance in a multi-site scenario? I am currently working in a project with developers working on three sites. One of the sites is in Delhi, India while the other two are in Europe. The communication between the European offices and the office in Delhi has poor bandwidth and quite bad latency, and a CVS update from there often takes 5-10 minutes even though only a few files have changed.
Are there any good CVS proxies out there, or any neat way of keeping separate CVS servers in sync? Do you have any other tips on how the performance can be improved in this scenario?
Unfortunately, we will be stuck with CVS during the scope of this project, so switching to something completely different is not an option in the short-term.
A: Why isn't switching an option?
I would highly recommend to switch to SVN or git instead. And that you do this conversion as soon as possible... like today :)
There are even svn tools/guides and git tools/guides out there that will do this CVS conversion for you.
I personally use and love SVN for my work, but based on your above description, it sounds like git might be the better option for you.
A: Here is what I have done a long time ago in similar circumstances when bandwidth and unreliable networks were an issue:
*
*Make a copy of the repository and install it in the remote location. You now have CVS1 and CVS2.
*Lock one of the two copies (CVS1) by preventing commits. This is achieved by modifying 'commitinfo' in the CVSROOT module.
*CVS1 can be used for updates only
*CVS2 can be used for updates and commit
When you want to give commit access to copy 1, proceed as follows:
*
*Lock CVS2
*Copy CVS2 to CVS1
*Unlock CVS1
This sounds cumbersome, and it is if you do it manually, but it works; it just requires a bit of discipline. Maybe timezones are on your side for once.
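For reference, a sketch of the two mechanical pieces involved (the commitinfo line is standard CVS; the rsync paths and host are examples):
# CVSROOT/commitinfo on the locked copy: reject every commit
ALL /bin/false
# Mirror the repository to the other site over SSH
rsync -az --delete -e ssh /var/cvs/ user@remote-site:/var/cvs/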
I wrote a tool to keep track of who had the commit token and to transfer repositories from one site to another automatically via rsync and SSH. It worked nicely for a couple of years. We never lost any data and it took about 5 minutes to transfer the token from one location to another.
The transfer tool was written in perl and it took me about two weeks to develop it, working on it full time.
I know that a long time ago FreeBSD developers used CVSup but I never used that tool myself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: For your semantic web type application, do you use RDF or a proprietary model for the internal representation? If you've worked on a project that stores data for use with the semantic web, do you use RDF or even OWL as the internal data format or do you have your own data model/ontology that you map to RDF for interoperability?
If you use RDF, what are your experiences with implementing various things like cyclical class hierachies?
If you do your own thing, how does it differ from RDF/OWL?
A: I work alongside the Jena team at HP (indeed, have contributed to Jena myself), so using Jena is a fairly straightforward choice in our team. However, there are more reasons than just "next-bench" convenience. The various standards for the semantic web contain quite a bit of detail and complexity, and getting that right isn't an easy thing to do by yourself. I've come across a number of downloadable ontologies and other datasets that, for example, don't conform to the IRI spec. In an entirely self-contained application it probably doesn't matter too much if you cut corners against the standards, but in that case you need to ask why you are using semantic web techniques in the first place. For me, a strong value in the semweb approach for an application would be data-interop and open data linking, in which case standards conformance is pretty central.
Most of my data is in a triple store, but I do use custom tables as indexes for commonly asked queries. If you know the query pattern ahead of time, a well-indexed table in a good db engine is going to be hard to beat for a generic schemaless triple store.
Obviously, one drawback to using Jena is that it's Java specific. I do use Jena with jruby, but I'm looking forwards to a good native Ruby RDF library (work is underway). I'd also like more complete RDF/OWL support in Javascript and Flex for when we're doing complex rich client interfaces.
Ian
A: I'm currently working on some really small projects in this area and I "mostly" use RDF there, although for parsing purposes I use a simple URI-registry in order to avoid cycles in the data structure itself. Although, I have to say that I'm still in the conceptual stage of these projects. In the end I rely mostly on 3rd-party storage backends like Jena, rdflib et al.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Delphi 2009 and Informix dbExpress with Windows 2003 I have a simple application that uses dbExpress to connect to an Informix database.
If I compile it with TurboDelphi it works on both WinXP and Win2003.
When I use the new Delphi 2009, my app works OK on WinXP but does not start on Win2003.
No message box with an error, only info in the event log:
Faulting application inf_dbexpress_test.exe, version 0.0.0.0, faulting
module kernel32.dll, version 5.2.3790.4062, fault address 0x0000bee7.
I think this is a problem with the dbExpress driver, since my other app compiled with Delphi 2009, which uses ODBC to connect to Informix, works on Win2003.
Can anybody test whether the Informix dbExpress drivers from Delphi 2009 work with Windows 2003?
Thanks for your help, but it does not work.
As for $INFORMIXDIR:
I reinstalled ClientSDK 3.5 and my $INFORMIXDIR is now:
c:\informix
(was c:\Program Files ...).
I can connect to DB from my app that uses ODBC.
But the dbExpress app cannot start, even in WinXP compatibility mode.
I have reported it in Delphi Quality Central as bug #67823:
A: Which turbo version do you have (the .net version or the Win32 version)? Do you have .net framework installed on 2003?
A: Hmm, maybe a strange suggestion, but one of the big differences introduced in Delphi 2009 is Unicode. All strings are now Unicode strings. Could that possibly be an explanation for the problem?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How can I use SQLBulkCopy on a table with a GUID primary key and default newsequentialid()? When using SQLBulkCopy on a table with a GUID primary key and default newsequentialid()
e.g
CREATE TABLE [dbo].[MyTable](
[MyPrimaryKey] [uniqueidentifier] NOT NULL CONSTRAINT [MyConstraint] DEFAULT (newsequentialid()),
[Status] [int] NULL,
[Priority] [int] NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY NONCLUSTERED
(
[MyPrimaryKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
With the C# code
tran = connection.BeginTransaction();
SqlBulkCopy sqlCopy = new SqlBulkCopy(connection,SqlBulkCopyOptions.Default, tran);
sqlCopy.DestinationTableName = "MyTable";
sqlCopy.WriteToServer(dataTable);
Gives you an error...
Column 'MyPrimaryKey' does not allow DBNull.Value
I've tried fiddling with the SqlBulkCopyOptions. The only thing that works is setting the MyPrimaryKey field to allow nulls and removing the primary key.
Anyone know if there is a workaround for this issue?
Or can you verify that there is no workaround (other than changing the table structure)?
A: You need to set up the column mappings. First call
sqlCopy.ColumnMappings.Clear();
Then call
sqlBulkCopy.ColumnMappings.Add("Status", "Status");
sqlBulkCopy.ColumnMappings.Add("Priority", "Priority");
This means the bulk copy will stop trying to insert into MyPrimaryKey column and will only insert into the status and Priority columns.
A: Your only options are to remove the MyPrimaryKey field from the data that is being loaded or to modify the table structure.
With the field being there with no values you are telling SQL that you want to force a null into the field, which, obviously, is not allowed.
A: Removing the database generated columns from the column set before writing is what you need to do.
We use LINQ-to-SQL for most of our database operations, but use another method for inserting many records at once, as L2S is a bit slow for this.
We have a generic method called BulkInsertAll<> that we can use on any table, which uses SqlBulkCopy internally. We dynamically generate the columns using reflection based on the generic type's properties. The ColumnAttribute is found in the .cs file generated from our .dbml file, where we have specified the guid primary key column as IsDbGenerated="true".
public void BulkInsertAll<T>( IEnumerable<T> entities ) {
entities = entities.ToArray();
string cs = Connection.ConnectionString;
var conn = new SqlConnection( cs );
conn.Open();
Type t = typeof( T );
var tableAttribute = (TableAttribute) t.GetCustomAttributes(
typeof( TableAttribute ), false
).Single();
var bulkCopy = new SqlBulkCopy( conn ) {
DestinationTableName = tableAttribute.Name
};
var properties = t.GetProperties().Where( EventTypeFilter );
// This will prevent the bulk insert from attempting to update DBGenerated columns
// Without, inserts with a guid pk will fail to get the generated sequential id
// If uninitialized guids are passed to the DB, it will throw duplicate key exceptions
properties = properties.Where(
x => !x.GetCustomAttributes( typeof( ColumnAttribute ), false )
.Cast<ColumnAttribute>().Any( attr => attr.IsDbGenerated )
);
var table = new DataTable();
foreach( var property in properties ) {
Type propertyType = property.PropertyType;
if( propertyType.IsGenericType &&
propertyType.GetGenericTypeDefinition() == typeof( Nullable<> ) ) {
propertyType = Nullable.GetUnderlyingType( propertyType );
}
table.Columns.Add( new DataColumn( property.Name, propertyType ) );
}
foreach( var entity in entities ) {
table.Rows.Add(
properties.Select(
property => GetPropertyValue( property.GetValue( entity, null ) )
).ToArray()
);
}
//specify the mapping for SqlBulk Upload
foreach( var col in properties ) {
bulkCopy.ColumnMappings.Add( col.Name, col.Name );
}
bulkCopy.WriteToServer( table );
conn.Close();
}
private bool EventTypeFilter( System.Reflection.PropertyInfo p ) {
var attribute = Attribute.GetCustomAttribute( p,
typeof( AssociationAttribute ) ) as AssociationAttribute;
if( attribute == null ) return true;
if( attribute.IsForeignKey == false ) return true;
return false;
}
private object GetPropertyValue( object o ) {
if( o == null )
return DBNull.Value;
return o;
}
And this works just fine. The entities won't be updated with the newly assigned Guid, so you'll have to make another query to get those, but the new rows have properly generated guids in the database.
We could wrap that .Where filter into the EventTypeFilter method, but I'm not the one who wrote most of this, and I haven't gone through it to tune everything up.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Config Files for Biztalk Host Processes A single Biztalk Server can have multiple Host processes. Is it possible to create an application config file for each host process? For example I would like to use Unity or log4net or whatever which needs such a configuration file.
Edit: Thanks to David Hall. To elaborate a bit more:
We have 12 Biztalk Servers in a group each running between 5 and 10 host processes. Some things the host processes run are unique to each process, but they also share a lot of code on the library level. The trigger for my question was the need to configure for example trace levels for the one system part (equivalent to host process) that currently gives trouble.
As an alternative it would help if I could figure out in which host process the current code is running, but I'll post that to a different question.
A: If I interpret your question correctly, you want to be able to have a separate version of the BTSNTSvc.exe.config file for each host instance?
So as well as the BizTalkServerApplication host instance, you have YourHostInstance host instances that you want to have a separate config for?
I don't 100% know that you cannot do this, but I am almost sure that it is not possible.
The reasons I'm fairly sure this isn't possible are:
*
*The BTSNTSvc.exe.config file attaches to the main executable BTSNTSvc.exe
*Config changes placed in BTSNTSvc.exe.config apply to all host instances regardless of their names.
I've just flipped through the BizTalk books I have to hand as well as some of the good web resources and can't find any mention of someone doing what you want.
So as far as I know, you will need to put the config settings for things like log4net into the BTSNTSvc.exe.config file, and have them the same for each host instance.
One way to get close to what you want would be to load application specific settings from the rules engine.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is it possible to manipulate an SVG document embedded in an HTML doc with JavaScript? I have made an SVG image, or more like a mini application, for viewing graphs of data. I want to include this in an HTML page and call methods on the SVG image.
Example:
<object id="img" data="image.svg" width="500" height="300"/>
<script>document.getElementById("img").addData([1,23,4]);</script>
Is it at all possible to call methods on the SVG document? If so, how do I declare the methods to expose in the SVG file, and how do I call them from the HTML document?
A: Things are actually simpler than you expect. You do not really need to read a convoluted tutorial to understand the concept, nor do you have to use jQuery. Here is the basic layout:
*
*A JavaScript function in your html document.
<script type="text/javascript">
function change(){
var s=document.getElementById("cube");
s.setAttribute("stroke","#0000FF");
}
</script>
*An SVG element that we are trying to manipulate.
<svg width="100" height="100" style="float: left;">
<rect x="10" y="10" width="60" height="60" id="cube" onclick="change()" stroke="#F53F0C" stroke-width="10" fill="#F5C60C" />
</svg>
*An inline Button that would trigger the change. Notice that in my example the event can also be triggered by clicking on the cube itself.
<button onclick="change()">Click</button>
A: A few years ago, I was asked to create a 2-player Ajax-based game using SVG. It may not be precisely the solution you're looking for, but it may help you listen for events in your SVG. Here's the SVG controller:
FYI, the SVG was being dragged and dropped (it was Stratego).
/****************** Track and handle SVG object movement *************/
var svgDoc;
var svgRoot;
var mover=''; //keeps track of what I'm dragging
///start function////
//do this onload
function start(evt){
//set up the svg document elements
svgDoc=evt.target.ownerDocument;
svgRoot=svgDoc.documentElement;
//add the mousemove event to the whole thing
svgRoot.addEventListener('mousemove',go,false);
//do this when the mouse is released
svgRoot.addEventListener('mouseup',releaseMouse,false);
}
// set the id of the target to drag
function setMove(id){ mover=id; }
// clear the id of the dragging object
function releaseMouse(){
if(allowMoves == true){ sendMove(mover); }
mover='';
}
// this is launched every mousemove on the doc
// if we are dragging something, move it
function go(evt){
if(mover != '' && allowMoves != false) {
//init it
var me=document.getElementById(mover);
//actually change the location
var moveX = evt.clientX-135; //css positioning minus 1/2 the width of the piece
var moveY = evt.clientY-65;
me.setAttributeNS(null, 'x', moveX);
me.setAttributeNS(null, 'y', moveY);
}
}
function moveThis(pieceID, x, y) {
$(pieceID).setAttributeNS(null, 'x', x);
$(pieceID).setAttributeNS(null, 'y', y);
}
My app was pure SVG + JavaScript, but this is the gist of it.
A: I would point you to Dr. David Dailey's site as the most awesome SVG/JS info you will find
http://srufaculty.sru.edu/david.dailey/svg/
A: Solution:
in svg:
<script>document.method = function() {}</script>
in html (using prototype to add event listeners):
<script>$("img").observe("load", function() {$("img").contentDocument.method()});
You need to listen for the load event on the image. Once the image is loaded, you can use element.contentDocument to access the document object of the SVG document. Any methods added to it will be available.
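A minimal sketch of the same idea without Prototype, using plain DOM calls (addData is a placeholder name):
<!-- inside image.svg -->
<script type="text/javascript"><![CDATA[
document.addData = function(points) {
    // draw the new data points here
};
]]></script>

<!-- in the HTML page -->
<object id="img" data="image.svg" width="500" height="300"></object>
<script>
var obj = document.getElementById("img");
obj.addEventListener("load", function() {
    obj.contentDocument.addData([1, 23, 4]); // call into the SVG document
}, false);
</script>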
A: I have explored the svg by JavaScripts. See the blog: Scaling SVG Graphics with JavaScripts
A: Also see the jQuery SVG plugin
A: For support in IE6, have a look at SVGWeb.
There are examples on how to manipulate SVG with JavaScript in the sample code supplied with the library.
There is also a fair amount of information in the archives of the mailing list.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: How do I turn off CSRF protection in a rails app? The CSRF prevention built in to Rails is causing some problems for some automated load testing we are doing, and I want to turn it off for the duration of the process. How do I do this?
A: I love simple questions with clear answers.
#I go in application.rb
self.allow_forgery_protection = false
If you want to do this for testing only you can move that into one of the environment files (obviously, you'll be touching Application then rather than self). You could also write something like:
#I still go in application.rb
self.allow_forgery_protection = false unless ENV["RAILS_ENV"] == "production"
See here for details. (Continuing Rails' wonderful tradition of having documentation of core features in 2-year-old blog posts, which were distilled from commit logs.)
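For instance, a minimal sketch of the environment-file variant, assuming you run the load tests under the test environment (adjust the file name to your setup):
# config/environments/test.rb
config.action_controller.allow_forgery_protection = false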
A: In Rails 3, remove the protect_from_forgery command in app/controllers/application_controller.rb
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: How to extract img src, title and alt from html using php? I would like to create a page where all images which reside on my website are listed with title and alternative representation.
I already wrote myself a little program to find and load all HTML files, but now I am stuck at how to extract src, title and alt from this HTML:
<img src="/image/fluffybunny.jpg" title="Harvey the bunny" alt="a cute little fluffy bunny" />
I guess this should be done with some regex, but since the order of the attributes may vary, and I need all of them, I don't really know how to parse this in an elegant way (I could do it the hard char-by-char way, but that's painful).
A: I used preg_match to do it.
In my case, I had a string containing exactly one <img> tag (and no other markup) that I got from Wordpress and I was trying to get the src attribute so I could run it through timthumb.
// get the featured image
$image = get_the_post_thumbnail($photos[$i]->ID);
// get the src for that image
$pattern = '/src="([^"]*)"/';
preg_match($pattern, $image, $matches);
$src = $matches[1];
unset($matches);
In the pattern, you could simply use $pattern = '/title="([^"]*)"/'; to grab the title or $pattern = '/alt="([^"]*)"/'; to grab the alt. Sadly, my regex isn't good enough to grab all three (alt/title/src) in one pass though.
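That said, one pass is possible if you match attribute/value pairs instead; a minimal sketch, assuming double-quoted attributes as above:
// Collect src/title/alt in one pass, regardless of attribute order.
preg_match_all('/(src|title|alt)="([^"]*)"/i', $image, $matches);
$attributes = array_combine($matches[1], $matches[2]);
// $attributes['src'], $attributes['title'], $attributes['alt'] (when present)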
A: Just to give a small example of using PHP's XML functionality for the task:
$doc=new DOMDocument();
$doc->loadHTML("<html><body>Test<br><img src=\"myimage.jpg\" title=\"title\" alt=\"alt\"></body></html>");
$xml=simplexml_import_dom($doc); // just to make xpath more simple
$images=$xml->xpath('//img');
foreach ($images as $img) {
echo $img['src'] . ' ' . $img['alt'] . ' ' . $img['title'];
}
I did use the DOMDocument::loadHTML() method because this method can cope with HTML-syntax and does not force the input document to be XHTML. Strictly speaking the conversion to a SimpleXMLElement is not necessary - it just makes using xpath and the xpath results more simple.
A: You may use simplehtmldom. Most of the jQuery selectors are supported in simplehtmldom. An example is given below
// Create DOM from URL or file
$html = file_get_html('http://www.google.com/');
// Find all images
foreach($html->find('img') as $element)
echo $element->src . '<br>';
// Find all links
foreach($html->find('a') as $element)
echo $element->href . '<br>';
A: The script must be edited like this:
foreach( $result[0] as $img_tag)
because preg_match_all returns an array of arrays.
A: $url="http://example.com";
$html = file_get_contents($url);
$doc = new DOMDocument();
@$doc->loadHTML($html);
$tags = $doc->getElementsByTagName('img');
foreach ($tags as $tag) {
echo $tag->getAttribute('src');
}
A: EDIT: now that I know better
Using regexps to solve this kind of problem is a bad idea and will likely lead to unmaintainable and unreliable code. Better to use an HTML parser.
Solution With regexp
In that case it's better to split the process into two parts:
*
*get all the img tag
*extract their metadata
I will assume your doc is not strict XHTML, so you can't use an XML parser. E.g., with this web page's source code:
/* preg_match_all match the regexp in all the $html string and output everything as
an array in $result. "i" option is used to make it case insensitive */
preg_match_all('/<img[^>]+>/i',$html, $result);
print_r($result);
Array
(
[0] => Array
(
[0] => <img src="/Content/Img/stackoverflow-logo-250.png" width="250" height="70" alt="logo link to homepage" />
[1] => <img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />
[2] => <img class="vote-down" src="/content/img/vote-arrow-down.png" alt="vote down" title="This was not helpful (click again to undo)" />
[3] => <img src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG" height=32 width=32 alt="gravatar image" />
[4] => <img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />
[...]
)
)
Then we get all the img tag attributes with a loop :
$img = array();
foreach( $result as $img_tag)
{
preg_match_all('/(alt|title|src)=("[^"]*")/i',$img_tag, $img[$img_tag]);
}
print_r($img);
Array
(
[<img src="/Content/Img/stackoverflow-logo-250.png" width="250" height="70" alt="logo link to homepage" />] => Array
(
[0] => Array
(
[0] => src="/Content/Img/stackoverflow-logo-250.png"
[1] => alt="logo link to homepage"
)
[1] => Array
(
[0] => src
[1] => alt
)
[2] => Array
(
[0] => "/Content/Img/stackoverflow-logo-250.png"
[1] => "logo link to homepage"
)
)
[<img class="vote-up" src="/content/img/vote-arrow-up.png" alt="vote up" title="This was helpful (click again to undo)" />] => Array
(
[0] => Array
(
[0] => src="/content/img/vote-arrow-up.png"
[1] => alt="vote up"
[2] => title="This was helpful (click again to undo)"
)
[1] => Array
(
[0] => src
[1] => alt
[2] => title
)
[2] => Array
(
[0] => "/content/img/vote-arrow-up.png"
[1] => "vote up"
[2] => "This was helpful (click again to undo)"
)
)
[<img class="vote-down" src="/content/img/vote-arrow-down.png" alt="vote down" title="This was not helpful (click again to undo)" />] => Array
(
[0] => Array
(
[0] => src="/content/img/vote-arrow-down.png"
[1] => alt="vote down"
[2] => title="This was not helpful (click again to undo)"
)
[1] => Array
(
[0] => src
[1] => alt
[2] => title
)
[2] => Array
(
[0] => "/content/img/vote-arrow-down.png"
[1] => "vote down"
[2] => "This was not helpful (click again to undo)"
)
)
[<img src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG" height=32 width=32 alt="gravatar image" />] => Array
(
[0] => Array
(
[0] => src="http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG"
[1] => alt="gravatar image"
)
[1] => Array
(
[0] => src
[1] => alt
)
[2] => Array
(
[0] => "http://www.gravatar.com/avatar/df299babc56f0a79678e567e87a09c31?s=32&d=identicon&r=PG"
[1] => "gravatar image"
)
)
[..]
)
)
Regexps are CPU intensive so you may want to cache this page. If you have no cache system, you can tweak your own by using ob_start and loading / saving from a text file.
How does this stuff work?
First, we use preg_match_all, a function that gets every string matching the pattern and outputs it into its third parameter.
The regexps:
<img[^>]+>
We apply it to the whole HTML page. It can be read as: every string that starts with "<img", contains non-">" characters and ends with a ">".
(alt|title|src)=("[^"]*")
We apply it successively to each img tag. It can be read as: every string starting with "alt", "title" or "src", then a "=", then a ' " ', a bunch of characters that are not ' " ', ending with a ' " '. The sub-strings between () are captured.
Finally, every time you want to deal with regexps, it's handy to have good tools to quickly test them. Check this online regexp tester.
EDIT: answer to the first comment.
It's true that I did not think about the (hopefully few) people using single quotes.
Well, if you use only ', just replace all the " by '.
If you mix both, first you should slap yourself :-), then try to use ("|') instead of " and [^"'] to replace [^"].
A: I have read the many comments on this page that complain that using a dom parser is unnecessary overhead. Well, it may be more expensive than a mere regex call, but the OP has stated that there is no control over the order of the attributes in the img tags. This fact leads to unnecessary regex pattern convolution. Beyond that, using a dom parser provides the additional benefits of readability, maintainability, and dom-awareness (regex is not dom-aware).
I love regex and I answer lots of regex questions, but when dealing with valid HTML there is seldom a good reason to regex over a parser.
In the demonstration below, see how easy and clean DOMDocument handles img tag attributes in any order with a mixture of quoting (and no quoting at all). Also notice that tags without a targeted attribute are not disruptive at all -- an empty string is provided as a value.
Code: (Demo)
$test = <<<HTML
<img src="/image/fluffybunny.jpg" title="Harvey the bunny" alt="a cute little fluffy bunny" />
<img src='/image/pricklycactus.jpg' title='Roger the cactus' alt='a big green prickly cactus' />
<p>This is irrelevant text.</p>
<img alt="an annoying white cockatoo" title="Polly the cockatoo" src="/image/noisycockatoo.jpg">
<img title=something src=somethingelse>
HTML;
libxml_use_internal_errors(true); // silences/forgives complaints from the parser (remove to see what is generated)
$dom = new DOMDocument();
$dom->loadHTML($test);
foreach ($dom->getElementsByTagName('img') as $i => $img) {
echo "IMG#{$i}:\n";
echo "\tsrc = " , $img->getAttribute('src') , "\n";
echo "\ttitle = " , $img->getAttribute('title') , "\n";
echo "\talt = " , $img->getAttribute('alt') , "\n";
echo "---\n";
}
Output:
IMG#0:
src = /image/fluffybunny.jpg
title = Harvey the bunny
alt = a cute little fluffy bunny
---
IMG#1:
src = /image/pricklycactus.jpg
title = Roger the cactus
alt = a big green prickly cactus
---
IMG#2:
src = /image/noisycockatoo.jpg
title = Polly the cockatoo
alt = an annoying white cockatoo
---
IMG#3:
src = somethingelse
title = something
alt =
---
Using this technique in professional code will leave you with a clean script, fewer hiccups to contend with, and fewer colleagues that wish you worked somewhere else.
A: If it's XHTML (your example is), you only need SimpleXML.
<?php
$input = '<img src="/image/fluffybunny.jpg" title="Harvey the bunny" alt="a cute little fluffy bunny"/>';
$sx = simplexml_load_string($input);
var_dump($sx);
?>
Output:
object(SimpleXMLElement)#1 (1) {
["@attributes"]=>
array(3) {
["src"]=>
string(22) "/image/fluffybunny.jpg"
["title"]=>
string(16) "Harvey the bunny"
["alt"]=>
string(26) "a cute little fluffy bunny"
}
}
A: Here's a PHP function I cobbled together from all of the above info for a similar purpose, namely adjusting image tag width and height properties on the fly ... a bit clunky, perhaps, but it seems to work dependably:
function ReSizeImagesInHTML($HTMLContent,$MaximumWidth,$MaximumHeight) {
// find image tags
preg_match_all('/<img[^>]+>/i',$HTMLContent, $rawimagearray,PREG_SET_ORDER);
// put image tags in a simpler array
$imagearray = array();
for ($i = 0; $i < count($rawimagearray); $i++) {
array_push($imagearray, $rawimagearray[$i][0]);
}
// put image attributes in another array
$imageinfo = array();
foreach($imagearray as $img_tag) {
preg_match_all('/(src|width|height)=("[^"]*")/i',$img_tag, $imageinfo[$img_tag]);
}
// combine everything into one array
$AllImageInfo = array();
foreach($imagearray as $img_tag) {
$ImageSource = str_replace('"', '', $imageinfo[$img_tag][2][0]);
$OrignialWidth = str_replace('"', '', $imageinfo[$img_tag][2][1]);
$OrignialHeight = str_replace('"', '', $imageinfo[$img_tag][2][2]);
$NewWidth = $OrignialWidth;
$NewHeight = $OrignialHeight;
$AdjustDimensions = "F";
if($OrignialWidth > $MaximumWidth) {
$diff = $OrignialWidth-$MaximumWidth;
$percnt_reduced = (($diff/$OrignialWidth)*100);
$NewHeight = floor($OrignialHeight-(($percnt_reduced*$OrignialHeight)/100));
$NewWidth = floor($OrignialWidth-$diff);
$AdjustDimensions = "T";
}
if($OrignialHeight > $MaximumHeight) {
$diff = $OrignialHeight-$MaximumHeight;
$percnt_reduced = (($diff/$OrignialHeight)*100);
$NewWidth = floor($OrignialWidth-(($percnt_reduced*$OrignialWidth)/100));
$NewHeight= floor($OrignialHeight-$diff);
$AdjustDimensions = "T";
}
$thisImageInfo = array('OriginalImageTag' => $img_tag , 'ImageSource' => $ImageSource , 'OrignialWidth' => $OrignialWidth , 'OrignialHeight' => $OrignialHeight , 'NewWidth' => $NewWidth , 'NewHeight' => $NewHeight, 'AdjustDimensions' => $AdjustDimensions);
array_push($AllImageInfo, $thisImageInfo);
}
// build array of before and after tags
$ImageBeforeAndAfter = array();
for ($i = 0; $i < count($AllImageInfo); $i++) {
if($AllImageInfo[$i]['AdjustDimensions'] == "T") {
$NewImageTag = str_ireplace('width="' . $AllImageInfo[$i]['OrignialWidth'] . '"', 'width="' . $AllImageInfo[$i]['NewWidth'] . '"', $AllImageInfo[$i]['OriginalImageTag']);
$NewImageTag = str_ireplace('height="' . $AllImageInfo[$i]['OrignialHeight'] . '"', 'height="' . $AllImageInfo[$i]['NewHeight'] . '"', $NewImageTag);
$thisImageBeforeAndAfter = array('OriginalImageTag' => $AllImageInfo[$i]['OriginalImageTag'] , 'NewImageTag' => $NewImageTag);
array_push($ImageBeforeAndAfter, $thisImageBeforeAndAfter);
}
}
// execute search and replace
for ($i = 0; $i < count($ImageBeforeAndAfter); $i++) {
$HTMLContent = str_ireplace($ImageBeforeAndAfter[$i]['OriginalImageTag'],$ImageBeforeAndAfter[$i]['NewImageTag'], $HTMLContent);
}
return $HTMLContent;
}
A: Here is THE solution, in PHP:
Just download QueryPath, and then do as follows:
$doc= qp($myHtmlDoc);
foreach($doc->xpath('//img') as $img) {
$src= $img->attr('src');
$title= $img->attr('title');
$alt= $img->attr('alt');
}
That's it, you're done !
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "164"
}
|
Q: linux to compile multiple java file Here is my directory structure.
/user/a
/user/b
/user/c
inside folders a, b, c there is a file person.java (it is the same file, just with a one-line modification).
Now, in my shell, I'm in the /user/ directory and I try to do
javac */person.java
The shell returns the following error:
person.java:14: duplicate class: person
Is there anything to resolve this?
A: I think the problem here might be that javac tries to compile everything in one go, which naturally results in duplicated class definitions.
A simple way to resolve this would be
find . -name '*.java' -exec javac {} \;
Edit:
Or to be more precise: find . -maxdepth 2 -name 'person.java' -exec javac {} \;
A: I would go for the small shell script:
for f in */person.java; do
javac "$f"
done
The first line finds all the person.java files in subdirectories; the second line compiles each file.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I use ClearCase to "add to source control ..." recursively? I unpacked a zip-file delivery into a clearcase view. Now I want to add the complete file tree to the repository. The GUI only provides an "Add to source control ..." for individual files/directories. Do you know how to recursively add the whole tree?
(I'm on a Windows system, but have Cygwin installed.)
A: Since I did not have access to clearfsimport, I added the files/directories in a two-step process:
1.) find . ! -path . -type d | xargs cleartool mkelem -mkpath -nc
This will create nodes for all new directories recursively
2.) find ./ -type f | xargs cleartool mkelem -nc
This will create nodes for all new files recursively
A: I would rather go with the clearfsimport script, which is better equipped to import the same set of files multiple times, and will automatically:
*
*add new files,
*make new versions of existing files previously imported (but modified in the re-imported source set of files)
*remove files already imported but no longer present in the source set of files.
*make a clear log of all operations made during the import process.
So if your zip-file delivery needs to be updated on a regular basis, clearfsimport is the way to go, but with the following options:
clearfsimport -preview -rec -nsetevent c:\sourceDir\* m:\MyView\MyVob\MyDestinationDirectory
Note the:
*
*-preview option: it will allow you to check what would happen without actually doing anything.
*'*', used only in a Windows environment, in order to import the content of a directory.
*-nsetevent option.
From CMWiki, about that 'nset' option:
By default, clearfsimport is meant to be used by the vob owner or a privileged user, but users often overlook the -nsetevent option, with which it may be used by any user.
This option drives clearfsimport not to set the time stamps of elements to this of the source file object outside the vob (which requires privileged access).
There is a minor non-obvious side-effect with this: once a version will have been created with a current time stamp, even the vob owner will not be able to import on top of it a version with an older (as it would be) time stamp, without this -nsetevent option. I.e. once you use this option, normal or privileged user, you are more or less bound to use it in the continuation.
A: ClearTeam Explorer, version 8 (maybe earlier as well), has recursive add of subdirectories/files when you select "Add to Source Control". When the "Add to Source Control" dialog box appears, check the "Include descendant artifacts of the selected directories" checkbox and uncheck the "Checkout descendant files only, do not checkout descendant directories" checkbox.
A: You have to use the command line. The context menu in Explorer doesn't do this recursively!
clearfsimport -recurse /usr/src/projectx /vobs/projectx/src
A: Here's a script to do it
And tips to integrate the script from Explorer
http://www.ibm.com/developerworks/rational/library/4687.html
A: Had a similar requirement to add a directory recursively to ClearCase. Since I did not have access to clearfsimport tool nor do I have ClearCase integrated with Windows Explorer, found an easy solution within ClearCase GUI.
1) Add the root directory using "Add to Source Control" menu option.
2) Click on this directory and then use "ClearCase Search" to search for all Private Files in this directory.
3) Select all from the Search Results and "Add to Source Control"
There you go! The entire directory is recursively added from within the ClearCase GUI.
A: Here is one other way I found by using the Windows Explorer:
*
*Select Search... from the context menu on the target directory.
*Search for *.
*Select all (Ctrl-A) files/directories in the result list.
*Select ClearCase > Add to source control... from the context menu on an item in the result list.
There you go ...
A: You can get a fix at
http://www-01.ibm.com/support/docview.wss?ratlid=cctocbody&rs=984&uid=swg21117629
A: You can also add this command to your context menu with a small script...
Ten best Triggers
edit: oh, sorry, didn't see that this was already suggested...
A: I agree,
find+select+add-to-source-control from Windows explorer is not a good option if the number of files to be version controlled is huge. As already mentioned above, explorer.exe crashes if we try to add a large number of files.
clearfsimport is the best and the most hassle free utility for this task.
-GP
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: Any decent text diff/merge engine for .NET? Requirements:
*
*free, preferably open-source
*implemented in one of the .NET managed langs
Google found these:
*
*A Generic, Reusable Diff
Algorithm on codeproject
*An O(ND) Difference Algorithm for C#
*Diff/Merge/Patch Library for C#/.NET by Joshua Tauberer
EDIT:
No apps please, only libraries.
A: You can grab the COM component that uses Google's Diff/Match/Patch. It works from .NET.
Update, 2010 Oct 17: The Google Diff/Match/Patch code has been ported to C#. The COM component still works, but if you're coming from .NET, you'll wanna use the .NET port directly.
A: GitSharp includes a diff engine based on Myers diff. Take a look at the demo, which implements a simple WPF diff viewer based on the Diff.Sections collection: http://www.eqqon.com/index.php/GitSharp#GitSharp.Demo
A: None of the answers so far (except possibly the GitSharp reference) deal with 3-way merge, so in case it helps anyone, I recently ported Tony Garnock-Jones' JavaScript diff3 implementation (from the Synchrotron project, based on Hunt and McIlroy 1976) to C#.
It's a simplistic single-file port of diff and three-way merge methods, but it's the standard algorithm and so far works very well for me: https://gist.github.com/2633407
A: I think the "Generic - Reusable Diff Algorithm in C#" on Codeproject is the best you can find as a .NET-Engine for diff/patch/merge. I made a project on my own with it and it fits my needs with most scenarios. There are one or two worst-case scencario when the algorithm made the patch-file larger than it have to be. But in most of the cases it works just fine for me (textfiles with a size of >30 MB).
I'm currently testing another Codeproject-Project you can find here: http://www.codeproject.com/KB/applications/patch.aspx
It's using some DLLs from Microsoft for patching, so it looks interesting. But those DLLs are unmanaged and this project is only some sort of wrapper for it. But maybe it can help you
Edit:
Just found another project, DiffPlex: http://diffplex.codeplex.com/
It's a combination of a .NET Diffing Library with both a Silverlight and HTML diff viewer. As stated there, DiffPlex is the library that CodePlex leverages to generate the diffs of files.
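To give a feel for DiffPlex, here is a minimal sketch of its inline diff builder, based on the API as I remember it (oldText and newText are your two input strings; check the project page for the current surface):
using System;
using DiffPlex.DiffBuilder;
using DiffPlex.DiffBuilder.Model;

var diff = InlineDiffBuilder.Diff(oldText, newText);
foreach (var line in diff.Lines)
{
    switch (line.Type)
    {
        case ChangeType.Inserted: Console.WriteLine("+ " + line.Text); break;
        case ChangeType.Deleted:  Console.WriteLine("- " + line.Text); break;
        default:                  Console.WriteLine("  " + line.Text); break;
    }
}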
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114"
}
|
Q: Starting to learn Windbg After being troubled by an issue that I simply did not have the knowledge to debug, I've just decided that I have to learn how to use WinDbg. My only problem: I have no clue where to start :-( I'm not really a WinAPI guy, having used languages that abstract the Windows API away from me.
So I just wonder: What is the best source (book, website) to learn WinDbg for someone who knows programming but not much about the inner depths of Windows? (And yes, I do read oldnewthing every day :))
A: *
*Advanced Windows Debugging by Hewardt and Pravat (best for general Win32 stuff)
*Debugging .Net 2.0 Applications by John Robbins (if you need SOS for .Net)
*The NT debugging blog (quite low level, but they've just posted a good set of links)
A: There's a few excellent blogs out there that help to gain windbg proficiency on an everyday basis:
*
*Dr. Debugalov
*Nynaeve
*Advanced Windows Debugging
*Debugging Toolbox
*Debugging Tricks
*Oleg Starodumov
*List of posts from/to Ivan Brugiolo
*Windbg by Volker von Einem
I, personally, just started using WinDbg for all my debugging tasks, and soon enough there were very few questions I could not answer and very few problems I could not solve. Powerful and exciting tool.
A: For a book, try
Advanced Windows Debugging (Addison-Wesley Microsoft Technology Series)
Also, for a great reference sheet, see
Common WinDbg Commands (Thematically Grouped) by Robert Kuster.
A: A great blog to check out is If broken it is, fix it you should. There's actually some posts on getting started in WinDbg.
A: My first experience with a debugger (actually SoftICE) was... well... a sort of crack.
There are some guides on the net about how to use a debugger to search for events and bypass/change program behaviour. Once you've mastered the basic debugger skills, you can take any simple application (or your own applications) and play with it.
(This is just one of the guides i was talking about: http://www.woodmann.com/krobar/other/patch36.txt )
A: Debugging .NET Applications has a chapter on how to use WinDbg
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: Aborting an ASP.NET Web Service asynchronous call I have a web service which takes quite some time to complete execution, and I am calling this web service asynchronously. I also want to implement an Abort functionality which will abort the web service method. Currently, I am observing that even if I dispose of the requesting web service object, the web service completes its execution on the server side anyway. How do I achieve an Abort functionality?
A: You call the Abort() method on the web service proxy object itself; don't simply dispose of it.
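As a minimal sketch with a hypothetical wsdl.exe-generated proxy (Abort() comes from the HttpWebClientProtocol base class) - note that this only cancels the client-side request; the server will keep executing unless the service method itself checks for cancellation:
// LongRunningService / DoWork are placeholder names for the generated proxy.
var proxy = new LongRunningService();
IAsyncResult ar = proxy.BeginDoWork(null, null);

// ... later, when the user cancels:
proxy.Abort(); // tears down the pending HTTP request on the client

try
{
    proxy.EndDoWork(ar);
}
catch (System.Net.WebException)
{
    // expected: the request was aborted
}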
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Playground Projects When I am unsure about something for a project, I usually use a small separate project and make it my playground for things that need to be tested. How do you do that?
A: This depends on what I want to try out. For simple algorithmic stuff, I have a console application that consists of many classes, one for every thing I want to try out. This way I have everything inside a single project and can browse through the ideas and approaches I've tried out during the years. I use folders for new topics and postfix the classnames with an increasing index (or something similiar that makes it easy to see the difference in the implementations by just looking at the class name), when I try out different solutions for a problem.
The only maintenance I do on the classes is to filter out the things that do not compile (anymore); wrong approaches are only marked by extensive comments in the respective class files. For me this is also a good way to see how my skills improve over time... and it's quite funny to look at old code, too.
I have a similar approach for GUI-related things as well as ASP.NET applications, so that I have a total of three test projects, all organized the way described above.
A: I use the 'spike tests' idea I first saw in 'TDD adventures in C#' by Ron Jeffries.
Spike tests are coded as unit test classes, with chunks of code you want to try out instead of test methods.
This way, you can easily try out some code you're not comfortable with by running it in test runner.
I usually place spike tests in the same project with unit tests. Once in repository, spike test code can help other developers understand your decisions in production code.
A: I do it the same way. A temporary project has some disadvantages... You have to set up a new project as soon as you want to test something else. Also, I found playgrounds to be pretty good references. Often I remember that I tried something previously, and then I can look into the old playground files and even change them to adapt to new requirements.
A: I don't use a whole subproject as playground - I normally make a simple test case, and if it works as expected I merge the code into my project. Over my time as a programmer there have been lots of these test cases, and I always keep them. It's good to have these examples - so whenever I am thinking about something, I always have a look at my test cases first.
A: Sometimes I use a simple console app to test things out.
Other times, I clone (or branch) the main project and try things out in the copy. Often times, the clone gets copied back over (or merged into) the original once I've finished trying things out.
Still other times, I make sure the main project is checked in to source control, and then try things out there. If I don't like how things are working out, I roll back the changes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamic radio button creation In wxPython, if I create a list of radio buttons and place the list initially, is it possible to change the contents in that list later?
For example, I have a panel that uses a boxSizer to place the widgets initially. One of those widgets is a list of radio buttons (I have also tried a normal radiobox). I would like to dynamically change the list based on variables from another class.
However, once the list is placed in the sizer, it's effectively "locked"; I can't just modify the list and have the changes appear. If I try re-adding the list to the sizer, it just gets put in the top left corner of the panel.
I'm sure I could hide the original list and manually place the new list in the same position but that feels like a kludge. I'm sure I'm making this harder than it is. I'm probably using the wrong widgets for this, much less the wrong approach, but I'm building this as a learning experience.
class Job(wiz.WizardPageSimple):
"""Character's job class."""
def __init__(self, parent, title, attribs):
wiz.WizardPageSimple.__init__(self, parent)
self.next = self.prev = None
self.sizer = makePageTitle(self, title)
self.charAttribs = attribs
#---Create widgets
self.Job_list = ["Aircraft Mechanic", "Vehicle Mechanic", "Electronics Specialist"]
box1_title = wx.StaticBox( self, -1, "" )
box1 = wx.StaticBoxSizer( box1_title, wx.VERTICAL )
grid1 = wx.BoxSizer(wx.VERTICAL)
for item in self.Job_list:
radio = wx.RadioButton(self, -1, item)
grid1.Add(radio)
##Debugging
self.btn = wx.Button(self, -1, "click")
self.Bind(wx.EVT_BUTTON, self.eligibleJob, self.btn)
#---Place widgets
self.sizer.Add(self.Job_intro)
self.sizer.Add(self.btn)
box1.Add(grid1)
self.sizer.Add(box1)
def eligibleJob(self, event):
"""Determine which Jobs a character is eligible for."""
if self.charAttribs.intelligence >= 12:
skillList = ["Analyst", "Interrogator", "Fire Specialist", "Aircraft Pilot"]
for skill in skillList:
self.Job_list.append(skill)
print self.Job_list ##Debugging
#return self.Job_list
A: To make new list elements appear in correct places, you have to re-layout the grid after adding new elements to it. For example, to add a few new items, you could call:
def addNewSkills(self, newSkillList):
'''newSkillList is a list of skill names you want to add'''
for skillName in newSkillList:
newRadioButton = wx.RadioButton(self, -1, skillName)
self.grid1.Add(newRadioButton) # appears in top-left corner of the panel
self.Layout() # all newly added radio buttons appear where they should be
self.Fit() # if you need to resize the panel to fit new items, this will help
where self.grid1 is the sizer you keep all your radio buttons on.
A: Two possible solutions
*
*Rebuild the sizer with the radio widgets each time you have to make a change
*Hold the radio button widgets in a list, and call SetLabel each time you have to change their labels (see the sketch below).
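A minimal sketch of the second approach, assuming you keep the buttons in a list when creating them:
# When building the page, keep references to the buttons:
self.radios = []
for item in self.Job_list:
    radio = wx.RadioButton(self, -1, item)
    self.radios.append(radio)
    grid1.Add(radio)

# Later, relabel them in place (new_labels is up to you):
for radio, label in zip(self.radios, new_labels):
    radio.SetLabel(label)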
A: I was able to fix it by using the info DzinX provided, with some modification.
It appears that placing the radio button box first "locked" the box into the sizer. If I tried to add a new box, I would get an error message stating that I was trying to add the widget to the same sizer twice.
By simply removing the radio buttons initially and having the user click a button to call a method, I could add the list of radio buttons without a problem.
Additionally, by having the user click a button, I did not run into errors of "class Foo has no attribute 'bar'". Apparently, when the wizard initializes, the attributes aren't available to the rest of the wizard pages. I had thought the wizard pages were dynamically created with each click of "Next", but they are all created at the same time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Calling .NET assembly from Java: JVM crashes I have a third party .NET assembly and a large Java application. I need to call methods provided by the .NET class library from the Java application. The assembly is not COM-enabled.
I have searched the net and so far I have the following:
C# code (cslib.cs):
using System;
namespace CSLib
{
public class CSClass
{
public static void SayHi()
{
System.Console.WriteLine("Hi");
}
}
}
compiled with (using .net 3.5, but the same happens when 2.0 is used):
csc /target:library cslib.cs
C++ code (clib.cpp):
#include <jni.h>
#using <CSLib.dll>
using namespace CSLib;
extern "C" _declspec(dllexport) void Java_CallCS_callCS(JNIEnv* env, jclass cls) {
CSLib::CSClass::SayHi();
}
compiled with (using VC 2008 tools, but the same happens when 2003 tools are used):
cl /clr /LD clib.cpp
mt -manifest clib.dll.manifest -outputresource:clib.dll;2
Java code (CallCS.java):
class CallCS {
static {
System.loadLibrary("clib");
}
private static native void callCS();
public static void main(String[] args) {
callCS();
}
}
When I try to run the java class, the Java VM crashes while invoking the method (it is able to load the library):
#
# An unexpected error has been detected by Java Runtime Environment:
#
# Internal Error (0xe0434f4d), pid=3144, tid=3484
#
# Java VM: Java HotSpot(TM) Client VM (10.0-b19 mixed mode, sharing windows-x86)
# Problematic frame:
# C [kernel32.dll+0x22366]
#
...
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j CallCS.callCS()V+0
j CallCS.main([Ljava/lang/String;)V+0
v ~StubRoutines::call_stub
However, if I create a plain cpp application that loads clib.dll and calls the exported function Java_CallCS_callCS, everything is OK.
I have tried this on both x86 and x64 environments and the result is the same. I have not tried other versions of Java, but I need the code to run on 1.5.0.
Moreover, if I modify clib.cpp to call only System methods everything works fine even from Java:
#include <jni.h>
#using <mscorlib.dll>
using namespace System;
extern "C" _declspec(dllexport) void Java_CallCS_callCS(JNIEnv* env, jclass cls) {
System::Console::WriteLine("It works");
}
To wrap up:
*
*I am ABLE to call System methods from Java -> clib.dll -> mscorlib.dll
*I am ABLE to call any methods from CPPApp -> clib.dll -> cslib.dll
*I am UNABLE to call any methods from Java -> clib.dll -> cslib.dll
I am aware of a workaround that uses 1. above - I can use reflection to load the assembly and invoke the desired methods using only System calls, but the code gets messy and I am hoping for a better solution.
I know about the dotnetfromjava project, which uses the reflection method, but I prefer not to add more complexity than needed. I'll use something like this if there is no other way, however.
I have looked at ikvm.net also, but my understanding is that it uses its own JVM (written in C#) to do the magic. However, running the entire Java application under its VM is not an option for me.
Thanks.
A: Look at jni4net, it will do the hard work for you.
A: Have you looked at ikvm.NET, which allows calls between .NET and Java code?
A: OK, the mystery is solved.
The JVM crash is caused by an unhandled System.IO.FileNotFoundException. The exception is thrown because the .NET assembly is searched for only in the folder where the calling exe file resides.
*
*The mscorlib.dll is in the Global Assembly Cache, so it works.
*The CPP application exe is in the same folder as the assembly, so it works also.
*The cslib.dll assembly is NEITHER in the folder of java.exe, NOR in the GAC, so it doesn't work.
It seems my only option is to install the .NET assembly in the GAC (the third-party DLL does have a strong name).
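For reference, a strong-named assembly can be installed into the GAC with the gacutil tool from the SDK:
gacutil /i cslib.dll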
A: I was so glad to find this article since I got stuck and had exactly that problem.
I want to contribute some code, which helps to overcome this problem.
In your Java constructor call the init method, which adds the resolve event.
In my experience it is necessary to call init early, NOT just before the call into your library in C++ code, since due to timing problems it may crash nonetheless.
I've put the init call into the constructor of my Java class that maps the JNI calls, which works great.
//C# code
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Reflection;
using System.Security.Permissions;
using System.Runtime.InteropServices;
namespace JNIBridge
{
public class Temperature
{
[SecurityPermission(SecurityAction.Assert, Flags = SecurityPermissionFlag.UnmanagedCode | SecurityPermissionFlag.Assertion | SecurityPermissionFlag.Execution)]
[ReflectionPermission(SecurityAction.Assert, Unrestricted = true)]
[FileIOPermission(SecurityAction.Assert, Unrestricted = true)]
public static double toFahrenheit(double value)
{
return (value * 9) / 5 + 32;
}
[SecurityPermission(SecurityAction.Assert, Flags = SecurityPermissionFlag.UnmanagedCode | SecurityPermissionFlag.Assertion | SecurityPermissionFlag.Execution)]
[ReflectionPermission(SecurityAction.Assert, Unrestricted = true)]
[FileIOPermission(SecurityAction.Assert, Unrestricted = true)]
public static double toCelsius(double value)
{
return (value - 32) * 5 / 9;
}
}
}
C++ Code
// C++ Code
#include "stdafx.h"
#include "JNIMapper.h"
#include "DotNet.h"
#include "stdio.h"
#include "stdlib.h"
#ifdef __cplusplus
extern "C" {
#endif
/*
* Class: DotNet
* Method: toFahrenheit
* Signature: (D)D
*/
static bool initialized = false;
using namespace System;
using namespace System::Reflection;
/***
This procedure is always needed when the .NET DLLs aren't in the directory of the calling exe!!!
It loads the needed assembly from a predefined path, if found in that directory, and returns the assembly.
*/
Assembly ^OnAssemblyResolve(Object ^obj, ResolveEventArgs ^args)
{
//System::Console::WriteLine("In OnAssemblyResolve");
#ifdef _DEBUG
/// Change to your .NET DLL paths here
String ^path = gcnew String("d:\\WORK\\JNIBridge\\x64\\Debug");
#else
String ^path = gcnew String(_T("d:\\WORK\\JNIBridge\\x64\\Release"));
#endif
array<String^>^ assemblies =
System::IO::Directory::GetFiles(path, "*.dll");
for (long ii = 0; ii < assemblies->Length; ii++) {
AssemblyName ^name = AssemblyName::GetAssemblyName(assemblies[ii]);
if (AssemblyName::ReferenceMatchesDefinition(gcnew AssemblyName(args->Name), name)) {
// System::Console::WriteLine("Try to resolve "+ name);
Assembly ^a = Assembly::Load(name);
//System::Console::WriteLine("Resolved "+ name);
return a;
}
}
return nullptr;
}
/**
This procedure adds the Assembly resolve event handler
*/
void AddResolveEvent()
{
AppDomain::CurrentDomain->AssemblyResolve +=
gcnew ResolveEventHandler(OnAssemblyResolve);
}
/*
* Class: DotNet
* Method: init
* Signature: ()Z
*/
JNIEXPORT jboolean JNICALL Java_DotNet_init
(JNIEnv *, jobject)
{
printf("In init\n");
AddResolveEvent();
printf("init - done.\n");
return true;
}
/*
* Class: DotNet
* Method: toFahrenheit
* Signature: (D)D
*/
JNIEXPORT jdouble JNICALL Java_DotNet_toFahrenheit
(JNIEnv * je, jobject jo, jdouble value)
{
printf("In Java_DotNet_toFahrenheit\n");
double result = 47;
try{
result = JNIBridge::Temperature::toFahrenheit(value);
} catch (...){
printf("Error caught");
}
return result;
}
/*
* Class: DotNet
* Method: toCelsius
* Signature: (D)D
*/
JNIEXPORT jdouble JNICALL Java_DotNet_toCelsius
(JNIEnv * je, jobject jo , jdouble value){
printf("In Java_DotNet_toCelsius\n");
double result = 11;
try{
result = JNIBridge::Temperature::toCelsius(value);
} catch (...){
printf("Error caught");
}
return result;
}
#ifdef __cplusplus
}
#endif
Java code
/***
** Java class file
**/
public class DotNet {
public native double toFahrenheit (double d);
public native double toCelsius (double d);
public native boolean init();
static {
try{
System.loadLibrary("JNIMapper");
} catch(Exception ex){
ex.printStackTrace();
}
}
public DotNet(){
init();
}
public double fahrenheit (double v) {
return toFahrenheit(v);
}
public double celsius (double v) {
return toCelsius(v);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How much faster is C++ than C#? Or is it now the other way around?
From what I've heard there are some areas in which C# proves to be faster than C++, but I've never had the guts to test it by myself.
I thought some of you could explain these differences in detail or point me to the right place for information on this.
A: It's five oranges faster. Or rather: there can be no (correct) blanket answer. C++ is a statically compiled language (but then, there's profile guided optimization, too), C# runs aided by a JIT compiler. There are so many differences that questions like “how much faster” cannot be answered, not even by giving orders of magnitude.
A: > From what I've heard ...
Your difficulty seems to be in deciding whether what you have heard is credible, and that difficulty will just be repeated when you try to assess the replies on this site.
How are you going to decide if the things people say here are more or less credible than what you originally heard?
One way would be to ask for evidence.
When someone claims "there are some areas in which C# proves to be faster than C++" ask them why they say that, ask them to show you measurements, ask them to show you programs. Sometimes they will simply have made a mistake. Sometimes you'll find out that they are just expressing an opinion rather than sharing something that they can show to be true.
Often information and opinion will be mixed up in what people claim, and you'll have to try and sort out which is which. For example, from the replies in this forum:
*
*"Take the benchmarks at http://shootout.alioth.debian.org/
with a great deal of scepticism, as
these largely test arithmetic code,
which is most likely not similar to
your code at all."
Ask yourself if you really
understand what "these largely test
arithmetic code" means, and then
ask yourself if the author has
actually shown you that his claim is
true.
*"That's a rather useless test, since it really depends on how well
the individual programs have been
optimized; I've managed to speed up
some of them by 4-6 times or more,
making it clear that the comparison
between unoptimized programs is
rather silly."
Ask yourself whether the author has
actually shown you that he's managed
to "speed up some of them by 4-6
times or more" - it's an easy claim to make!
A: In my experience (and I have worked a lot with both languages), the main problem with C# compared to C++ is high memory consumption, and I have not found a good way to control it. It was the memory consumption that would eventually slow down .NET software.
Another factor is that JIT compiler cannot afford too much time to do advanced optimizations, because it runs at runtime, and the end user would notice it if it takes too much time. On the other hand, a C++ compiler has all the time it needs to do optimizations at compile time. This factor is much less significant than memory consumption, IMHO.
A: For 'embarrassingly parallel' problems, when using Intel TBB and OpenMP in C++ I have observed a roughly 10x performance increase compared to similar (pure math) problems done with C# and TPL. SIMD is one area where C# cannot compete, but I also got the impression that TPL has a sizeable overhead.
That said, I only use C++ for performance-critical tasks where I know I will be able to multithread and get results quickly. For everything else, C# (and occasionally F#) is just fine.
A: There is no strict reason why a bytecode based language like C# or Java that has a JIT cannot be as fast as C++ code. However C++ code used to be significantly faster for a long time, and also today still is in many cases. This is mainly due to the more advanced JIT optimizations being complicated to implement, and the really cool ones are only arriving just now.
So C++ is faster, in many cases. But this is only part of the answer. The cases where C++ is actually faster, are highly optimized programs, where expert programmers thoroughly optimized the hell out of the code. This is not only very time consuming (and thus expensive), but also commonly leads to errors due to over-optimizations.
On the other hand, code in interpreted languages gets faster in later versions of the runtime (.NET CLR or Java VM), without you doing anything. And there are a lot of useful optimizations JIT compilers can do that are simply impossible in languages with pointers. Also, some argue that garbage collection should generally be as fast or faster as manual memory management, and in many cases it is. You can generally implement and achieve all of this in C++ or C, but it's going to be much more complicated and error prone.
As Donald Knuth said, "premature optimization is the root of all evil". If you really know for sure that your application will mostly consist of very performance critical arithmetic, and that it will be the bottleneck, and it's certainly going to be faster in C++, and you're sure that C++ won't conflict with your other requirements, go for C++. In any other case, concentrate on first implementing your application correctly in whatever language suits you best, then find performance bottlenecks if it runs too slow, and then think about how to optimize the code. In the worst case, you might need to call out to C code through a foreign function interface, so you'll still have the ability to write critical parts in lower level language.
Keep in mind that it's relatively easy to optimize a correct program, but much harder to correct an optimized program.
Giving actual percentages of speed advantages is impossible, it largely depends on your code. In many cases, the programming language implementation isn't even the bottleneck. Take the benchmarks at http://benchmarksgame.alioth.debian.org/ with a great deal of scepticism, as these largely test arithmetic code, which is most likely not similar to your code at all.
A: It's an extremely vague question without real definitive answers.
For example, I'd rather play 3D games that are created in C++ than in C#, because the performance is certainly a lot better. (And I know XNA, etc., but it comes nowhere near the real thing.)
On the other hand, as previously mentioned; you should develop in a language that lets you do what you want quickly, and then if necessary optimize.
A: In theory, for long running server-type application, a JIT-compiled language can become much faster than a natively compiled counterpart. Since the JIT compiled language is generally first compiled to a fairly low-level intermediate language, you can do a lot of the high-level optimizations right at compile time anyway. The big advantage comes in that the JIT can continue to recompile sections of code on the fly as it gets more and more data on how the application is being used. It can arrange the most common code-paths to allow branch prediction to succeed as often as possible. It can re-arrange separate code blocks that are often called together to keep them both in the cache. It can spend more effort optimizing inner loops.
I doubt that this is done by .NET or any of the JREs, but it was being researched back when I was in university, so it's not unreasonable to think that these sort of things may find their way into the real world at some point soon.
A: Applications that require intensive memory access eg. image manipulation are usually better off written in unmanaged environment (C++) than managed (C#). Optimized inner loops with pointer arithmetics are much easier to have control of in C++. In C# you might need to resort to unsafe code to even get near the same performance.
A: .NET languages can be as fast as C++ code, or even faster, but C++ code will have a more constant throughput as the .NET runtime has to pause for GC, even if it's very clever about its pauses.
So if you have some code that has to consistently run fast without any pause, .NET will introduce latency at some point, even if you are very careful with the runtime GC.
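Since .NET 3.5 the runtime does expose a knob for this; a minimal sketch (RunLatencySensitiveWork is a placeholder for your own critical section):
using System.Runtime;

// Ask the GC to avoid blocking collections during a critical section.
GCLatencyMode old = GCSettings.LatencyMode;
GCSettings.LatencyMode = GCLatencyMode.LowLatency;
try
{
    RunLatencySensitiveWork(); // hypothetical latency-critical code
}
finally
{
    GCSettings.LatencyMode = old;
}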
A: I've tested the C++ vector and its C# equivalent - List - plus simple 2D arrays.
I'm using Visual C#/C++ 2010 Express editions. Both projects are simple console applications; I've tested them in standard (no custom settings) release and debug mode.
C# lists run faster on my PC, and array initialization is also faster in C#; math operations are slower.
I'm using an Intel Core2Duo P8600@2.4GHz, C# - .NET 4.0.
I know that the vector implementation is different from the C# List, but I just wanted to test collections that I would use to store my objects (and to be able to use the index accessor).
Of course you need to clear memory (let's say for every use of new), but I wanted to keep the code simple.
C++ vector test:
static void TestVector()
{
clock_t start,finish;
start=clock();
vector<vector<double>> myList=vector<vector<double>>();
int i=0;
for( i=0; i<500; i++)
{
myList.push_back(vector<double>());
for(int j=0;j<50000;j++)
myList[i].push_back(j+i);
}
finish=clock();
cout<<(finish-start)<<endl;
cout<<(double(finish - start)/CLOCKS_PER_SEC);
}
C# list test:
private static void TestVector()
{
DateTime t1 = System.DateTime.Now;
List<List<double>> myList = new List<List<double>>();
int i = 0;
for (i = 0; i < 500; i++)
{
myList.Add(new List<double>());
for (int j = 0; j < 50000; j++)
myList[i].Add(j + i);
}
DateTime t2 = System.DateTime.Now;
Console.WriteLine(t2 - t1);
}
C++ - array:
static void TestArray()
{
cout << "Normal array test:" << endl;
const int rows = 5000;
const int columns = 9000;
clock_t start, finish;
start = clock();
double** arr = new double*[rows];
for (int i = 0; i < rows; i++)
arr[i] = new double[columns];
finish = clock();
cout << (finish - start) << endl;
start = clock();
for (int i = 0; i < rows; i++)
for (int j = 0; j < columns; j++)
arr[i][j] = i * j;
finish = clock();
cout << (finish - start) << endl;
}
C# - array:
private static void TestArray()
{
const int rows = 5000;
const int columns = 9000;
DateTime t1 = System.DateTime.Now;
double[][] arr = new double[rows][];
for (int i = 0; i < rows; i++)
arr[i] = new double[columns];
DateTime t2 = System.DateTime.Now;
Console.WriteLine(t2 - t1);
t1 = System.DateTime.Now;
for (int i = 0; i < rows; i++)
for (int j = 0; j < columns; j++)
arr[i][j] = i * j;
t2 = System.DateTime.Now;
Console.WriteLine(t2 - t1);
}
Time: (Release/Debug)
C++
*
*600 / 606 ms array init,
*200 / 270 ms array fill,
*1sec /13sec vector init & fill.
(Yes, 13 seconds, I always have problems with lists/vectors in debug mode.)
C#:
*
*20 / 20 ms array init,
*403 / 440 ms array fill,
*710 / 742 ms list init & fill.
A: One particular scenario where C++ still has the upper hand (and will, for years to come) occurs when polymorphic decisions can be predetermined at compile time.
Generally, encapsulation and deferred decision-making is a good thing because it makes the code more dynamic, easier to adapt to changing requirements and easier to use as a framework. This is why object oriented programming in C# is very productive and it can be generalized under the term “generalization”. Unfortunately, this particular kind of generalization comes at a cost at run-time.
Usually, this cost is non-substantial but there are applications where the overhead of virtual method calls and object creation can make a difference (especially since virtual methods prevent other optimizations such as method call inlining). This is where C++ has a huge advantage because you can use templates to achieve a different kind of generalization which has no impact on runtime but isn't necessarily any less polymorphic than OOP. In fact, all of the mechanisms that constitute OOP can be modelled using only template techniques and compile-time resolution.
In such cases (and admittedly, they're often restricted to special problem domains), C++ wins against C# and comparable languages.
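To make that concrete, here is a minimal sketch (the shape types are invented for the example) contrasting runtime virtual dispatch with template dispatch that the compiler can resolve and inline:
#include <iostream>

// Dynamic polymorphism: area() goes through the vtable at runtime.
struct ShapeBase {
    virtual double area() const = 0;
    virtual ~ShapeBase() {}
};

struct DynCircle : ShapeBase {
    double r;
    explicit DynCircle(double radius) : r(radius) {}
    double area() const { return 3.141592653589793 * r * r; }
};

// Static polymorphism: any type with an area() member works; the call is
// resolved (and typically inlined) at compile time, with no vtable involved.
struct StatCircle {
    double r;
    double area() const { return 3.141592653589793 * r * r; }
};

template <typename Shape>
double total_area(const Shape& s, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += s.area();   // direct, inlinable call
    return sum;
}

int main() {
    DynCircle d(2.0);
    const ShapeBase& base = d;
    std::cout << base.area() << '\n';                       // virtual dispatch
    std::cout << total_area(StatCircle{2.0}, 1000) << '\n'; // compile-time dispatch
}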
A: C++ (or C for that matter) gives you fine-grained control over your data structures. If you want to bit-twiddle you have that option. Large managed Java or .NET apps (OWB, Visual Studio 2005) that use the internal data structures of the Java/.NET libraries carry the baggage with them. I've seen OWB designer sessions using over 400 MB of RAM and BIDS for cube or ETL design getting into the 100's of MB as well.
On a predictable workload (such as most benchmarks that repeat a process many times) a JIT can get you code that is optimised well enough that there is no practical difference.
IMO on large applications the difference is not so much the JIT as the data structures that the code itself is using. Where an application is memory-heavy you will get less efficient cache usage. Cache misses on modern CPUs are quite expensive. Where C or C++ really win is where you can optimise your usage of data structures to play nicely with the CPU cache.
A: I suppose there are applications written in C# that run fast, just as there are plenty of C++ applications that run fast (well, C++ is just older... and take UNIX too...)
- the question indeed is: what is it that users and developers are complaining about?
Well, IMHO, in the case of C# we have a very comfortable UI, a very nice hierarchy of libraries, and the whole interface system of the CLI. In the case of C++ we have templates, ATL, COM, MFC and a whole shebang of already written and running code like OpenGL, DirectX and so on... Developers complain about unpredictably timed GC calls in the case of C# (meaning the program runs fast, and then for one second - bang! - it's stuck).
Writing code in C# is very simple and fast (not forgetting that this also increases the chance of errors).
In the case of C++, developers complain about memory leaks (meaning crashes), calls between DLLs, as well as "DLL hell" - the problem of supporting and replacing libraries with newer ones...
I think the more skill you have in a programming language, the more quality (and speed) will characterize your software.
A: Well, it depends. If the bytecode is translated into machine code (and not just JITted at first use), and if your program uses many allocations/deallocations, it could be faster, because the GC algorithm needs just one pass (theoretically) through the whole memory, whereas normal malloc/realloc/free C/C++ calls cause overhead on every call (call overhead, data-structure overhead, cache misses ;) ).
So it is theoretically possible (also for other GC languages).
I don't really see the extreme disadvantage of not being able to use metaprogramming in C# for most applications, because most programmers don't use it anyway.
Another big advantage is that the SQL-like LINQ "extension" provides opportunities for the compiler to optimize calls to databases (in other words, the compiler could compile the whole LINQ to one "blob" binary where the called functions are inlined or optimized for your use, but I'm speculating here).
A: I would put it this way: programmers who write faster code are the ones who are better informed about what makes current machines go fast, and incidentally they are also the ones who use an appropriate tool that allows for precise, low-level and deterministic optimisation techniques. For these reasons, these people are the ones who use C/C++ rather than C#. I would go as far as stating this as a fact.
A: If I'm not mistaken, C# generics are instantiated at runtime (by the JIT), unlike C++ templates, which are instantiated at compile time. That must be slower than the compile-time templates of C++.
And when you take in all the other compile-time optimizations mentioned by so many others, as well as the lack of safety that does, indeed, mean more speed...
I'd say C++ is the obvious choice in terms of raw speed and minimum memory consumption. But this also translates into more time developing the code and ensuring you aren't leaking memory or causing any null pointer exceptions.
Verdict:
*
*C#: Faster development, slower run
*C++: Slow development, faster run.
A: There are some major differences between C# and C++ where performance is concerned:
*
*C# is GC / heap based. The allocation and the GC itself are overhead, as is the non-locality of the memory accesses
*C++ optimizers have become very good over the years. JIT compilers cannot achieve the same level since they have only limited compilation time and don't see the global scope
Besides that, programmer competence also plays a role. I have seen bad C++ code where classes were passed by value as arguments all over the place. You can actually make the performance worse in C++ if you don't know what you are doing, as the sketch below illustrates.
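As an illustration of that mistake (the types are invented for the example), compare the two signatures below; the first copies the whole object on every call, the second does not:
#include <cstddef>
#include <string>
#include <vector>

struct Document {
    std::vector<std::string> lines;  // potentially megabytes of data
};

// Bad: the entire Document is copied on every call.
std::size_t count_by_value(Document doc) { return doc.lines.size(); }

// Good: no copy, identical usage at the call site.
std::size_t count_by_ref(const Document& doc) { return doc.lines.size(); }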
A: I found this April 2020 read: https://www.quora.com/Why-is-C-so-slow-compared-to-Python by a real-world programmer with 15+ years of software development experience.
It states that C# is usually slower because it is compiled to Common Intermediate Language (CIL) instead of machine code like C++. The CIL is then put through the Common Language Runtime (CLR), which outputs machine code. However, if you keep executing C#, the runtime caches the machine code it produces, so it is saved for the next execution. All in all, C# can be faster if you execute multiple times, since it is running as machine code after multiple executions.
There are also comments noting that a good C++ programmer can do hand optimizations that, while time-consuming, will pay off in the end.
A: For graphics the standard C# Graphics class is way slower than GDI accessed via C/C++.
I know this has nothing to do with the language per se, more with the total .NET platform, but Graphics is what is offered to the developer as a GDI replacement, and its performance is so bad I wouldn't even dare to do graphics with it.
We have a simple benchmark we use to see how fast a graphics library is, and that is simply drawing random lines in a window. C++/GDI is still snappy with 10000 lines while C#/Graphics has difficulty doing 1000 in real-time.
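For reference, a minimal sketch of that sort of benchmark (my own reconstruction; the poster's actual code isn't shown) using the .NET Graphics class in a WinForms Paint handler:
using System;
using System.Diagnostics;
using System.Drawing;
using System.Windows.Forms;

// Draws 1000 random lines per paint and reports the elapsed time
// (assumed details; the original benchmark's specifics are not given).
class LineBenchForm : Form
{
    private readonly Random rng = new Random();

    protected override void OnPaint(PaintEventArgs e)
    {
        Stopwatch sw = Stopwatch.StartNew();
        using (Pen pen = new Pen(Color.Black))
        {
            for (int i = 0; i < 1000; i++)
            {
                e.Graphics.DrawLine(pen,
                    rng.Next(ClientSize.Width), rng.Next(ClientSize.Height),
                    rng.Next(ClientSize.Width), rng.Next(ClientSize.Height));
            }
        }
        sw.Stop();
        Text = "1000 lines in " + sw.ElapsedMilliseconds + " ms";
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new LineBenchForm());
    }
}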
A: Garbage collection is the main reason Java and C# CANNOT be used for real-time systems.
*
*When will the GC happen?
*How long will it take?
This is non-deterministic.
A: I'm going to start by disagreeing with part of the accepted (and well-upvoted) answer to this question by stating:
There are actually plenty of reasons why JITted code will run slower than a properly optimized C++ (or other language without runtime overhead) program, including:
*
*compute cycles spent on JITting code at runtime are by definition unavailable for use in program execution.
*any hot paths in the JITter will be competing with your code for instruction and data cache in the CPU. We know that cache dominates when it comes to performance and native languages like C++ do not have this type of contention, by design.
*a run-time optimizer's time budget is necessarily much more constrained than that of a compile-time optimizer (as another commenter pointed out)
Bottom line: Ultimately, you will almost certainly be able to create a faster implementation in C++ than you could in C#.
Now, with that said, how much faster really isn't quantifiable, as there are too many variables: the task, problem domain, hardware, quality of implementations, and many other factors. You'll have to run tests on your scenario to determine the difference in performance, and then decide whether it is worth the additional effort and complexity.
This is a very long and complex topic, but I feel it's worth mentioning for the sake of completeness that C#'s runtime optimizer is excellent, and is able to perform certain dynamic optimizations at runtime that are simply not available to C++ with its compile-time (static) optimizer. Even with this, the advantage is still typically deeply in the native application's court, but the dynamic optimizer is the reason for the "almost certainly" qualifier given above.
--
In terms of relative performance, I was also disturbed by the figures and discussions I saw in some other answers, so I thought I'd chime in and at the same time, provide some support for the statements I've made above.
A huge part of the problem with those benchmarks is you can't write C++ code as if you were writing C# and expect to get representative results (eg. performing thousands of memory allocations in C++ is going to give you terrible numbers.)
Instead, I wrote slightly more idiomatic C++ code and compared against the C# code @Wiory provided. The two major changes I made to the C++ code were:
*
*used vector::reserve()
*flattened the 2d array to 1d to achieve better cache locality (contiguous block)
C# (.NET 4.6.1)
private static void TestArray()
{
    const int rows = 5000;
    const int columns = 9000;
    DateTime t1 = System.DateTime.Now;
    double[][] arr = new double[rows][];
    for (int i = 0; i < rows; i++)
        arr[i] = new double[columns];
    DateTime t2 = System.DateTime.Now;
    Console.WriteLine(t2 - t1);
    t1 = System.DateTime.Now;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++)
            arr[i][j] = i;
    t2 = System.DateTime.Now;
    Console.WriteLine(t2 - t1);
}
Run time (Release): Init: 124ms, Fill: 165ms
C++14 (Clang v3.8/C2)
#include <chrono>
#include <iostream>
#include <utility>
#include <vector>

auto TestSuite::ColMajorArray()
{
    constexpr size_t ROWS = 5000;
    constexpr size_t COLS = 9000;

    auto initStart = std::chrono::steady_clock::now();
    auto arr = std::vector<double>();
    // NB: reserve() does not change the vector's size, so writing through
    // operator[] below is technically undefined behaviour; resize() would be
    // the strictly correct (but slower-to-initialize) choice.
    arr.reserve(ROWS * COLS);
    auto initFinish = std::chrono::steady_clock::now();
    auto initTime = std::chrono::duration_cast<std::chrono::microseconds>(initFinish - initStart);

    auto fillStart = std::chrono::steady_clock::now();
    for (size_t i = 0, r = 0; r < ROWS; ++r)
    {
        for (size_t c = 0; c < COLS; ++c)
        {
            arr[i++] = static_cast<double>(r * c);
        }
    }
    auto fillFinish = std::chrono::steady_clock::now();
    auto fillTime = std::chrono::duration_cast<std::chrono::milliseconds>(fillFinish - fillStart);
    return std::make_pair(initTime, fillTime);
}
Run time (Release): Init: 398µs (yes, that's microseconds), Fill: 152ms
Total Run times: C#: 289ms, C++ 152ms (roughly 90% faster)
Observations
*
*Changing the C# implementation to the same 1d array implementation yielded Init: 40ms, Fill: 171ms, Total: 211ms (C++ was still almost 40% faster).
*It is much harder to design and write "fast" code in C++ than it is to write "regular" code in either language.
*It's (perhaps) astonishingly easy to get poor performance in C++; we saw that with the unreserved vector's performance. And there are lots of pitfalls like this.
*C#'s performance is rather amazing when you consider all that is going on at runtime. And that performance is comparatively easy to access.
*More anecdotal data comparing the performance of C++ and C#: https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=gpp&lang2=csharpcore
The bottom line is that C++ gives you much more control over performance. Do you want to use a pointer? A reference? Stack memory? Heap? Dynamic polymorphism or eliminate the runtime overhead of a vtable with static polymorphism (via templates/CRTP)? In C++ you have to... er, get to make all these choices (and more) yourself, ideally so that your solution best addresses the problem you're tackling.
Ask yourself if you actually want or need that control, because even for the trivial example above, you can see that although there is a significant improvement in performance, it requires a deeper investment to access.
A: We have had to determine whether C# was comparable to C++ in performance, and I wrote some test programs for that (using Visual Studio 2005 for both languages). It turned out that without garbage collection, and considering only the language (not the framework), C# has basically the same performance as C++. Memory allocation is way faster in C# than in C++, and C# has a slight edge in determinism when data sizes are increased beyond cache-line boundaries. However, all of this eventually had to be paid for, and there is a huge cost in the form of non-deterministic performance hits for C# due to garbage collection.
A: C/C++ can perform vastly better in programs where there are either large arrays or heavy looping/iteration over arrays (of any size). This is the reason that graphics are generally much faster in C/C++, because heavy array operations underlie almost all graphics operations. .NET is notoriously slow in array indexing operations due to all the safety checks, and this is especially true for multi-dimensional arrays (and, yes, rectangular C# arrays are even slower than jagged C# arrays).
The bonuses of C/C++ are most pronounced if you stick directly with pointers and avoid Boost, std::vector and other high-level containers, as well as inline every small function possible. Use old-school arrays whenever possible. Yes, you will need more lines of code to accomplish the same thing you did in Java or C# as you avoid high-level containers. If you need a dynamically sized array, you will just need to remember to pair your new T[] with a corresponding delete[] statement (or use std::unique_ptr)—the price for the extra speed is that you must code more carefully. But in exchange, you get to rid yourself of the overhead of managed memory / garbage collector, which can easily be 20% or more of the execution time of heavily object-oriented programs in both Java and .NET, as well as those massive managed memory array indexing costs. C++ apps can also benefit from some nifty compiler switches in certain specific cases.
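To illustrate the new T[] / delete[] pairing mentioned above, here is a minimal sketch (entirely my own illustration):
#include <cstddef>
#include <memory>

void raw_array_demo(std::size_t n) {
    double* data = new double[n];       // dynamically sized raw array
    for (std::size_t i = 0; i < n; ++i)
        data[i] = static_cast<double>(i) * 0.5;
    delete[] data;                      // must pair delete[] with new[]

    // The same idea without manual cleanup:
    std::unique_ptr<double[]> safe(new double[n]);
    safe[0] = 1.0;                      // released automatically via delete[]
}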
I am an expert programmer in C, C++, Java, and C#. I recently had the rare occasion to implement the exact same algorithmic program in the latter 3 languages. The program had a lot of math and multi-dimensional array operations. I heavily optimized this in all 3 languages. The results were typical of what I normally see in less rigorous comparisons: Java was about 1.3x faster than C# (most JVMs are more optimized than the CLR), and the C++ raw pointer version came in about 2.1x faster than C#. Note that the C# program only used safe code—it is my opinion that you might as well code it in C++ before using the unsafe keyword.
Lest anyone think I have something against C#, I will close by saying that C# is probably my favorite language. It is the most logical, intuitive and rapid development language I've encountered so far. I do all my prototyping in C#. The C# language has many small, subtle advantages over Java (yes, I know Microsoft had the chance to fix many of Java's shortcomings by entering the game late and arguably copying Java). Toast to Java's Calendar class anyone? If Microsoft ever spends real effort to optimize the CLR and the .NET JITter, C# could seriously take over. I'm honestly surprised they haven't already—they did so many things right in the C# language, why not follow it up with heavy-hitting compiler optimizations? Maybe if we all beg.
A: As usual, it depends on the application. There are cases where C# is probably negligibly slower, and other cases where C++ is 5 or 10 times faster, especially in cases where operations can be easily SIMD'd.
A: I know it isn't what you were asking, but C# is often quicker to write than C++, which is a big bonus in a commercial setting.
A: > After all, the answers have to be somewhere, haven't they? :)
Umm, no.
As several replies noted, the question is under-specified in ways that invite questions in response, not answers. To take just one way:
*
*the question conflates language with language implementation - this C program is both 2,194 times slower and 1.17 times faster than this C# program - we would have to ask you: Which language implementations?
And then which programs? Which machine? Which OS? Which data set?
A: It really depends on what you're trying to accomplish in your code. I've heard that it's just urban legend that there is any performance difference between VB.NET, C# and managed C++. However, I've found, at least in string comparisons, that managed C++ beats the pants off C#, which in turn beats the pants off VB.NET.
I've by no means done any exhaustive comparisons in algorithmic complexity between the languages. I'm also just using the default settings in each of the languages. In VB.NET I'm using settings to require declaration of variables, etc. Here is the code I'm using for managed C++: (As you can see, this code is quite simple). I'm running the same in the other languages in Visual Studio 2013 with .NET 4.6.2.
#include "stdafx.h"
using namespace System;
using namespace System::Diagnostics;
bool EqualMe(String^ first, String^ second)
{
return first->Equals(second);
}
int main(array<String ^> ^args)
{
Stopwatch^ sw = gcnew Stopwatch();
sw->Start();
for (int i = 0; i < 100000; i++)
{
EqualMe(L"one", L"two");
}
sw->Stop();
Console::WriteLine(sw->ElapsedTicks);
return 0;
}
A: One area where I instrumented code in C++ vs C# was creating a database connection to SQL Server and returning a resultset. I compared C++ (a thin layer over ODBC) vs C# (ADO.NET SqlClient) and found that C++ was about 50% faster than the C# code. ADO.NET is supposed to be a low-level interface for dealing with the database. Where you see perhaps a bigger difference is in memory consumption rather than raw speed.
Another thing that makes C++ code faster is that you can tune the compiler options at a granular level, optimizing things in a way you can't in C#.
A: Inspired by this, I did a quick test covering roughly 60 percent of the instructions commonly needed in most programs.
Here’s the C# code:
for (int i = 0; i < 1000; i++)
{
    StreamReader str = new StreamReader("file.csv");
    StreamWriter stw = new StreamWriter("examp.csv");
    string strL = "";
    while ((strL = str.ReadLine()) != null)
    {
        ArrayList al = new ArrayList();
        string[] strline = strL.Split(',');
        al.AddRange(strline);
        foreach (string str1 in strline)
        {
            stw.Write(str1 + ",");
        }
        stw.Write("\n");
    }
    str.Close();
    stw.Close();
}
A string array and an ArrayList are used purposely to include those instructions.
Here's the c++ code:
for (int i = 0; i < 1000; i++)
{
    std::fstream file("file.csv", ios::in);
    if (!file.is_open())
    {
        std::cout << "File not found!\n";
        return 1;
    }
    ofstream myfile;
    myfile.open("example.txt");
    std::string csvLine;
    while (std::getline(file, csvLine))
    {
        std::istringstream csvStream(csvLine);
        std::vector<std::string> csvColumn;
        std::string csvElement;
        while (std::getline(csvStream, csvElement, ','))
        {
            csvColumn.push_back(csvElement);
        }
        for (std::vector<std::string>::iterator j = csvColumn.begin(); j != csvColumn.end(); ++j)
        {
            myfile << *j << ", ";
        }
        csvColumn.clear();
        csvElement.clear();
        csvLine.clear();
        myfile << "\n";
    }
    myfile.close();
    file.close();
}
The input file size I used was 40 KB.
And here's the result -
*
*C++ code ran in 9 seconds.
*C# code: 4 seconds!!!
Oh, but this was on Linux... With C# running on Mono... And C++ with g++.
OK, this is what I got on Windows – Visual Studio 2003:
*
*C# code ran in 9 seconds.
*C++ code – horrible 370 seconds!!!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "308"
}
|
Q: Class structure pattern question. What should I choose? What are (if any)the implied assumptions or restrictions and the differences of designing like:
A) this:
class SampleClass1
{
    IWorker workerA;
    IWorker workerB;
    void setWorkerA(IWorker w);
    void setWorkerB(IWorker w);
    WorkResult doWork();
}
B) versus this:
class SampleClass2
{
    WorkResult doWork(IWorker workerA, IWorker workerB);
}
I know it depends on the specific project, but what if the above class is part of a small framework? The first class is able to maintain state and separate the steps more naturally, but the second class ensures "real-time communication" with the external caller more naturally, since the Workers are passed on each doWork() call.
Are there any recommended usages or generic practices that guide the choice between the two above ways? Thanks.
A: SampleClass1
*
*I may need to maintain state of the workers between doWork
*I might need the capability to set Workers individually. (doWork with 1 and 2, then with 2 and 3)
*I want to maintain the workers because it might be expected to run doWork multiple times on the same workers.
*I'm not a utility class. An instance of me is important.
SampleClass2
*
*Give me two workers and I will do work with them.
*I don't care who they are and I don't want to maintain them.
*It's someone else's job to maintain any pairing between workers.
*I may be more of a utility class. Maybe I can just be static.
A: In option (A) you are creating what is known as a Function Object or Functor; this is a design pattern that is well documented.
The two main advantages are:
*
*The workers can be set in one place and then the object used elsewhere
*The object can retain state between calls
Also if you are using a dependency injection framework (Spring, Guice etc...) the functor can be automatically initialized and injected wherever required.
Function objects are extensively used in libraries e.g. the C++ Standard Template Library
A: Another option, a variant of case A, is the following:
class SampleClass3
{
    SampleClass3(IWorker workerA, IWorker workerB);
    WorkResult doWork();
}
Advantages:
*
*It's harder to make the object defective, since you are required to supply all the workers that are needed at construction time (in contrast to case A).
*You can still carry state inside SampleClass3 and/or one of the workers. (This is impossible in case B.)
Disadvantages:
*
*You have to have all your workers ready before you construct SampleClass3, instead of being able to provide them later. Of course, you could also provide the setters, so that they can be changed later.
A: If more than one method depends on IWorker a and IWorker b, I say do sample A.
If only doWork() uses both IWorker a and IWorker b, then do sample B.
Also, what is the real purpose of your SampleClass? doWork looks a bit like a utility method more than anything else.
A: A) is a bad design because it allows the object to be defective (one or both of the worker classes might not have been set).
B) can be good. Make it static though if you do not depend on the internal state of SampleClass2
A: Another Option:
IWorker class:
static WorkResult doWork(IWorker a, IWorker b);
A: IMO the 2nd approach looks better: it requires the caller to use less code to perform a task. The 2nd approach is also less error prone, since the caller doesn't need to worry that the object might not be completely initialized.
A: How about instead defining a WorkDelegate (or alternatively an interface having a single doWork method without arguments) that simply returns a WorkResult, and letting individual classes decide how they implement it? This way, you don't confine yourself to premature decisions. A sketch follows below.
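A minimal C# sketch of that idea (all names are hypothetical, since the question's language isn't stated):
public class WorkResult { }

public delegate WorkResult WorkDelegate();

public class Coordinator
{
    // The worker pairing lives in the delegate's closure,
    // not as state inside this class.
    public WorkResult Run(WorkDelegate work)
    {
        return work();
    }
}

// Usage: var result = new Coordinator().Run(() => PairUp(workerA, workerB));
// where PairUp is whatever combines two IWorkers into a WorkResult.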
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do I close a connection early? I'm attempting to do an AJAX call (via JQuery) that will initiate a fairly long process. I'd like the script to simply send a response indicating that the process has started, but JQuery won't return the response until the PHP script is done running.
I've tried this with a "close" header (below), and also with output buffering; neither seems to work. Any guesses? or is this something I need to do in JQuery?
<?php
echo( "We'll email you as soon as this is done." );
header( "Connection: Close" );
// do some stuff that will take a while
mail( 'dude@thatplace.com', "okay I'm done", 'Yup, all done.' );
?>
A: It's necessary to send these 2 headers:
Connection: close
Content-Length: n (n = size of output in bytes )
Since you need to know the size of your output, you'll need to buffer it, then flush it to the browser:
// buffer all upcoming output
ob_start();
echo 'We\'ll email you as soon as this is done.';
// get the size of the output
$size = ob_get_length();
// send headers to tell the browser to close the connection
header('Content-Length: '.$size);
header('Connection: close');
// flush all output
ob_end_flush();
ob_flush();
flush();
// if you're using sessions, this prevents subsequent requests
// from hanging while the background process executes
if (session_id()) {session_write_close();}
/******** background process starts here ********/
Also, if your web server applies automatic gzip compression to the output (i.e. Apache with mod_deflate), this won't work because the actual size of the output is changed, and the Content-Length is no longer accurate. Disable gzip compression for the particular script.
For more details, visit http://www.zulius.com/how-to/close-browser-connection-continue-execution
A: A better solution is to fork a background process. It is fairly straightforward on unix/linux:
<?php
echo "We'll email you as soon as this is done.";
system("php somestuff.php dude@thatplace.com >/dev/null &");
?>
You should look at this question for better examples:
PHP execute a background process
A: Assuming you have a Linux server and root access, try this. It is the simplest solution I have found.
Create a new directory for the following files and give it full permissions. (We can make it more secure later.)
mkdir test
chmod -R 777 test
cd test
Put this in a file called bgping.
echo starting bgping
ping -c 15 www.google.com > dump.txt &
echo ending bgping
Note the &. The ping command will run in the background while the current process moves on to the echo command.
It will ping www.google.com 15 times, which will take about 15 seconds.
Make it executable.
chmod 777 bgping
Put this in a file called bgtest.php.
<?php
echo "start bgtest.php\n";
exec('./bgping', $output, $result);
echo "output:".print_r($output,true)."\n";
echo "result:".print_r($result,true)."\n";
echo "end bgtest.php\n";
?>
When you request bgtest.php in your browser, you should get the following response quickly, without waiting about 15 seconds for the ping command to complete.
start bgtest.php
output:Array
(
[0] => starting bgping
[1] => ending bgping
)
result:0
end bgtest.php
The ping command should now be running on the server. Instead of the ping command, you could run a PHP script:
php -n -f largejob.php > dump.txt &
Hope this helps!
A: Here's a modification to Timbo's code that works with gzip compression.
// buffer all upcoming output
if (!ob_start("ob_gzhandler")) {
    define('NO_GZ_BUFFER', true);
    ob_start();
}
echo "We'll email you as soon as this is done.";
// Flush here before getting content length if ob_gzhandler was used.
if (!defined('NO_GZ_BUFFER')) {
    ob_end_flush();
}
// get the size of the output
$size = ob_get_length();
// send headers to tell the browser to close the connection
header("Content-Length: $size");
header('Connection: close');
// flush all output
ob_end_flush();
ob_flush();
flush();
// if you're using sessions, this prevents subsequent requests
// from hanging while the background process executes
if (session_id()) session_write_close();
/******** background process starts here ********/
A: I'm on a shared host and fastcgi_finish_request is set up to exit scripts completely. I don't like the Connection: close solution either. Using it forces a separate connection for subsequent requests, costing additional server resources. I read the Transfer-Encoding: chunked Wikipedia article and learned that 0\r\n\r\n terminates a response. I haven't thoroughly tested this across browser versions and devices, but it works on all 4 of my current browsers.
// Disable automatic compression
// @ini_set('zlib.output_compression', 'Off');
// @ini_set('output_buffering', 'Off');
// @ini_set('output_handler', '');
// @apache_setenv('no-gzip', 1);
// Chunked Transfer-Encoding & Gzip Content-Encoding
function ob_chunked_gzhandler($buffer, $phase) {
    if (!headers_sent()) header('Transfer-Encoding: chunked');
    $buffer = ob_gzhandler($buffer, $phase);
    return dechex(strlen($buffer))."\r\n$buffer\r\n";
}
ob_start('ob_chunked_gzhandler');
// First Chunk
echo "Hello World";
ob_flush();
// Second Chunk
echo ", Grand World";
ob_flush();
ob_end_clean();
// Terminating Chunk
echo "\x30\r\n\r\n";
ob_flush();
flush();
// Post Processing should not be displayed
for ($i = 0; $i < 10; $i++) {
    print("Post-Processing");
    sleep(1);
}
A: You can use FastCGI with PHP-FPM to use the fastcgi_finish_request() function. In this way, you can continue to do some processing while the response has already been sent to the client.
*
*example of how to use fastcgi_finish_request() (Nov 2010)
You find this in the PHP manual here: FastCGI Process Manager (FPM); but that function specifically is not further documented in the manual. Here is the excerpt from the PHP-FPM: PHP FastCGI Process Manager Wiki:
fastcgi_finish_request()
Scope: php function
Category: Optimization
This feature allows you to speed up implementation of some php queries. Acceleration is possible when there are actions in the process of script execution that do not affect server response. For example, saving the session in memcached can occur after the page has been formed and passed to a web server. fastcgi_finish_request() is a php feature, that stops the response output. Web server immediately starts to transfer response "slowly and sadly" to the client, and php at the same time can do a lot of useful things in the context of a query, such as saving the session, converting the downloaded video, handling all kinds of statistics, etc.
fastcgi_finish_request() can invoke executing shutdown function.
Note: fastcgi_finish_request() has a quirk where calls to flush, print, or echo will terminate the script early.
To avoid that issue, you can call ignore_user_abort(true) right before or after the fastcgi_finish_request call:
ignore_user_abort(true);
fastcgi_finish_request();
A: TL;DR Answer:
ignore_user_abort(true); //Safety measure so that the user doesn't stop the script too early.
$content = 'Hello World!'; //The content that will be sent to the browser.
header('Content-Length: ' . strlen($content)); //The browser will close the connection when the size of the content reaches "Content-Length", in this case, immediately.
ob_start(); //Content past this point...
echo $content;
//...will be sent to the browser (the output buffer gets flushed) when this code executes.
ob_end_flush();
ob_flush();
flush();
if (session_id())
{
    session_write_close(); //Closes writing to the output buffer.
}
//Anything past this point will be ran without involving the browser.
Function Answer:
ignore_user_abort(true);
function sendAndAbort($content)
{
    header('Content-Length: ' . strlen($content));
    ob_start();
    echo $content;
    ob_end_flush();
    ob_flush();
    flush();
}
sendAndAbort('Hello World!');
//Anything past this point will be ran without involving the browser.
A: Complete version:
ignore_user_abort(true);//avoid apache to kill the php running
ob_start();//start buffer output
echo "show something to user";
session_write_close();//close session file on server side to avoid blocking other requests
header("Content-Encoding: none");//send header to avoid the browser side to take content as gzip format
header("Content-Length: ".ob_get_length());//send length header
header("Connection: close");//or redirect to some url: header('Location: http://www.google.com');
ob_end_flush();flush();//really send content, can't change the order:1.ob buffer to normal buffer, 2.normal buffer to output
//continue do something on server side
ob_start();
sleep(5);//the user won't wait for the 5 seconds
echo 'for diyism';//user can't see this
file_put_contents('/tmp/process.log', ob_get_contents());
ob_end_clean();
A: The following PHP manual page (incl. user-notes) suggests multiple instructions on how to close the TCP connection to the browser without ending the PHP script:
*
*Connection handling Docs
Supposedly it requires a bit more than sending a close header.
OP then confirms: yup, this did the trick: pointing to user-note #71172 (Nov 2006) copied here:
Closing the users browser connection whilst keeping your php script running has been an issue since [PHP] 4.1, when the behaviour of register_shutdown_function() was modified so that it would not automatically close the users connection.
sts at mail dot xubion dot hu Posted the original solution:
<?php
header("Connection: close");
ob_start();
phpinfo();
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush();
flush();
sleep(13);
error_log("do something in the background");
?>
Which works fine until you substitute phpinfo() for echo('text I want user to see'); in which case the headers are never sent!
The solution is to explicitly turn off output buffering and clear the buffer prior to sending your header information. Example:
<?php
ob_end_clean();
header("Connection: close");
ignore_user_abort(true); // just to be safe
ob_start();
echo('Text the user will see');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush(); // Unless both are called !
// Do processing here
sleep(30);
echo('Text user will never see');
?>
Just spent 3 hours trying to figure this one out, hope it helps someone :)
Tested in:
*
*IE 7.5730.11
*Mozilla Firefox 1.81
Later on, in July 2010, in a related answer, Arctic Fire linked two further user-notes that were follow-ups to the one above:
*
*Connection Handling user-note #89177 (Feb 2009)
*Connection Handling user-note #93441 (Sep 2009)
A: You could try to do multithreading.
You could whip up a script that makes a system call (using shell_exec) that calls the php binary with the script to do your work as the parameter. But I don't think that is the most secure way. Maybe you can tighten things up by chrooting the php process and other measures.
Alternatively, there's a class at phpclasses that does that: http://www.phpclasses.org/browse/package/3953.html. But I don't know the specifics of the implementation.
A: Joeri Sebrechts' answer is close, but it destroys any existing content that may be buffered before you wish to disconnect. It doesn't call ignore_user_abort properly, allowing the script to terminate prematurely. diyism's answer is good but is not generically applicable. E.g. a person may have more or fewer output buffers than that answer handles, so it may simply not work in your situation and you won't know why.
This function allows you to disconnect any time (as long as headers have not been sent yet) and retains the content you've generated so far. The extra processing time is unlimited by default.
function disconnect_continue_processing($time_limit = null) {
    ignore_user_abort(true);
    session_write_close();
    set_time_limit((int) $time_limit); // defaults to no limit
    while (ob_get_level() > 1) { // only keep the last buffer if nested
        ob_end_flush();
    }
    $last_buffer = ob_get_level();
    $length = $last_buffer ? ob_get_length() : 0;
    header("Content-Length: $length");
    header('Connection: close');
    if ($last_buffer) {
        ob_end_flush();
    }
    flush();
}
If you need extra memory, too, allocate it before calling this function.
A: Note for mod_fcgid users (please, use at your own risk).
Quick Solution
The accepted answer of Joeri Sebrechts is indeed functional. However, if you use mod_fcgid you may find that this solution does not work on its own. In other words, when the flush function is called the connection to the client does not get closed.
The FcgidOutputBufferSize configuration parameter of mod_fcgid may be to blame. I have found this tip in:
*
*this reply of Travers Carter and
*this blog post of Seumas Mackinnon.
After reading the above, you may come to the conclusion that a quick solution would be to add the line (see "Example Virtual Host" at the end):
FcgidOutputBufferSize 0
in either your Apache configuration file (e.g, httpd.conf), your FCGI configuration file (e.g, fcgid.conf) or in your virtual hosts file (e.g., httpd-vhosts.conf).
In (1) above, a variable named "OutputBufferSize" is mentioned. This is the old name of the FcgidOutputBufferSize mentioned in (2) (see the upgrade notes in the Apache web page for mod_fcgid).
Details & A Second Solution
The above solution disables the buffering performed by mod_fcgid either for the whole server or for a specific virtual host. This might lead to a performance penalty for your web site. On the other hand, this may well not be the case since PHP performs buffering on its own.
In case you do not wish to disable mod_fcgid's buffering there is another solution... you can force this buffer to flush.
The code below does just that by building on the solution proposed by Joeri Sebrechts:
<?php
ob_end_clean();
header("Connection: close");
ignore_user_abort(true); // just to be safe
ob_start();
echo('Text the user will see');
echo(str_repeat(' ', 65537)); // [+] Line added: Fill up mod_fcgi's buffer.
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // Strange behaviour, will not work
flush(); // Unless both are called !
// Do processing here
sleep(30);
echo('Text user will never see');
?>
What the added line of code essentially does is fill up mod_fcgi's buffer, thus forcing it to flush. The number "65537" was chosen because the default value of the FcgidOutputBufferSize variable is "65536", as mentioned in the Apache web page for the corresponding directive. Hence, you may need to adjust this value accordingly if another value is set in your environment.
My Environment
*
*WampServer 2.5
*Apache 2.4.9
*PHP 5.5.19 VC11, x86, Non Thread Safe
*mod_fcgid/2.3.9
*Windows 7 Professional x64
Example Virtual Host
<VirtualHost *:80>
DocumentRoot "d:/wamp/www/example"
ServerName example.local
FcgidOutputBufferSize 0
<Directory "d:/wamp/www/example">
Require all granted
</Directory>
</VirtualHost>
A: this worked for me
//avoid apache to kill the php running
ignore_user_abort(true);
//start buffer output
ob_start();
echo "show something to user1";
//close session file on server side to avoid blocking other requests
session_write_close();
//send length header
header("Content-Length: ".ob_get_length());
header("Connection: close");
//really send content, can't change the order:
//1.ob buffer to normal buffer,
//2.normal buffer to output
ob_end_flush();
flush();
//continue do something on server side
ob_start();
//replace it with the background task
sleep(20);
A: Your problem can be solved by doing some parallel programming in php. I asked a question about it a few weeks ago here: How can one use multi threading in PHP applications
And got great answers. I liked one in particular very much. The writer made a reference to the Easy Parallel Processing in PHP (Sep 2008; by johnlim) tutorial, which can actually solve your problem very well, as I have already used it to deal with a similar problem that came up a couple of days ago.
A: Ok, so basically the way jQuery does the XHR request, even the ob_flush method will not work because you are unable to run a function on each onreadystatechange. jQuery checks the state, then chooses the proper actions to take (complete,error,success,timeout). And although I was unable to find a reference, I recall hearing that this does not work with all XHR implementations.
A method that I believe should work for you is a cross between the ob_flush and forever-frame polling.
<?php
function wrap($str)
{
    return "<script>{$str}</script>";
}
ob_start(); // begin buffering output
echo wrap("console.log('test1');");
ob_flush(); // push current buffer
flush(); // this flush actually pushes to the browser
$t = time();
while ($t > (time() - 3)) {} // wait 3 seconds
echo wrap("console.log('test2');");
?>
<html>
<body>
<iframe src="ob.php"></iframe>
</body>
</html>
And because the scripts are executed inline as the buffers are flushed, you get execution. To make this useful, change the console.log to a callback method defined in your main script, set up to receive data and act on it. Hope this helps. Cheers, Morgan.
A: An alternative solution is to add the job to a queue and make a cron script which checks for new jobs and runs them.
I had to do it that way recently to circumvent limits imposed by a shared host - exec() et al was disabled for PHP run by the webserver but could run in a shell script.
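A minimal sketch of that approach (the table layout, file names and connection details are all mine, purely for illustration): the web request only inserts a row and returns, and a cron-driven worker picks the job up later.
<?php
// enqueue.php - called from the web request; returns immediately.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare("INSERT INTO jobs (payload, status) VALUES (?, 'pending')");
$stmt->execute(array(json_encode(array('email' => 'dude@thatplace.com'))));
echo "We'll email you as soon as this is done.";

// worker.php - run from cron, e.g.  * * * * * php /path/to/worker.php
// Select one 'pending' job, do the slow work, then mark the row 'done'.
?>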
A: If the flush() function does not work, you may need to set the following options in php.ini:
output_buffering = Off
zlib.output_compression = Off
A: Latest Working Solution
// client can see outputs if any
ignore_user_abort(true);
ob_start();
echo "success";
$buffer_size = ob_get_length();
session_write_close();
header("Content-Encoding: none");
header("Content-Length: $buffer_size");
header("Connection: close");
ob_end_flush();
ob_flush();
flush();
sleep(2);
ob_start();
// client cannot see the result of code below
A: After trying many different solutions from this thread (none of them worked for me), I found a solution on the official PHP.net page:
function sendResponse($response) {
    ob_end_clean();
    header("Connection: close\r\n");
    header("Content-Encoding: none\r\n");
    ignore_user_abort(true);
    ob_start();
    echo $response; // Actual response that will be sent to the user
    $size = ob_get_length();
    header("Content-Length: $size");
    ob_end_flush();
    flush();
    if (ob_get_contents()) {
        ob_end_clean();
    }
}
A: Couldn't get any of the above to work with IIS but:
With all its limitations, the built in PHP -S webserver comes to the rescue.
Caller script (IIS)
//limit of length required!
<?php
$s = file_get_contents('http://127.0.0.1:8080/test.php',false,null,0,10);
echo $s;
Worker script (built-in webserver @ 8080 - beware: single thread):
ob_end_clean();
header("Connection: close");
ignore_user_abort(true);
ob_start();
echo 'Text the user will see';
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // All output buffers must be flushed here
flush(); // Force output to client
// Do processing here
sleep(5);
file_put_contents('ts.txt',date('H:i:s',time()));
//echo('Text user will never see');
Ugly enough? :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "108"
}
|
Q: SHGetFolderPath() 32 bit vs 64 bit What happens if I use the SHGetFolderPath API call on a 32-bit system with the CSIDL_PROGRAM_FILESX86 folder id instead of the CSIDL_PROGRAM_FILES id?
Theoretically CSIDL_PROGRAM_FILESX86 should map to C:\Program Files (x86) on a 64-bit system, but what does it map to on a 32-bit system where this path doesn't exist?
A: The different scenarios are described in this article on MSDN.
Scroll down to remarks, "FOLDERID_ProgramFiles"
The interpretation of certain KNOWNFOLDERID values depends on whether the folder is part of a 32-bit or 64-bit application and whether that application is running on a 32-bit or 64-bit operating system. If your application needs to distinguish between, for example, Program Files and Program Files (x86), you must use the right KNOWNFOLDERID for the situation.
The following tables summarize the KNOWNFOLDERID use in those cases.
FOLDERID_ProgramFiles
OS      Application  KNOWNFOLDERID             Default Path                       CSIDL Equivalent
32 bit  32 bit       FOLDERID_ProgramFiles     %SystemDrive%\Program Files        CSIDL_PROGRAM_FILES
32 bit  32 bit       FOLDERID_ProgramFilesX86  %SystemDrive%\Program Files        CSIDL_PROGRAM_FILESX86
32 bit  32 bit       FOLDERID_ProgramFilesX64  (undefined)                        Not applicable
64 bit  64 bit       FOLDERID_ProgramFiles     %SystemDrive%\Program Files        CSIDL_PROGRAM_FILES
64 bit  64 bit       FOLDERID_ProgramFilesX86  %SystemDrive%\Program Files (x86)  CSIDL_PROGRAM_FILESX86
64 bit  64 bit       FOLDERID_ProgramFilesX64  %SystemDrive%\Program Files        None
64 bit  32 bit       FOLDERID_ProgramFiles     %SystemDrive%\Program Files (x86)  CSIDL_PROGRAM_FILES
64 bit  32 bit       FOLDERID_ProgramFilesX86  %SystemDrive%\Program Files (x86)  CSIDL_PROGRAM_FILESX86
64 bit  32 bit       FOLDERID_ProgramFilesX64  %SystemDrive%\Program Files        None
Summary: It will point to %SystemDrive%\Program Files
A: Try this:
GetEnvironmentVariable("ProgramW6432", szBuffer, nBufferSize);
YMMV
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What are the equivalents of colored grep? Sometimes coloring a logfile or other output gives a good overview when looking for particular patterns and behaviors
I just saw that grep have a coloring feature
grep -C 99999 --color <regexp> <filename>
What other methods are there?
A: There are many programs that support coloring like Colortail
Maybe this can be helpful too: GenericColouriser
A: Here is a snippet for a log coloring tool I sometimes use.
Note that it only works against stdin/stdout and in a terminal supporting ANSI colors.
#include <stdio.h>
#include <string.h>
#include <regex.h>

#define MAX_LINE 4096

#define RESET       "\033[0m"
#define BLACK       "\033[30m"          /* Black */
#define RED         "\033[31m"          /* Red */
#define GREEN       "\033[32m"          /* Green */
#define YELLOW      "\033[33m"          /* Yellow */
#define BLUE        "\033[34m"          /* Blue */
#define MAGENTA     "\033[35m"          /* Magenta */
#define CYAN        "\033[36m"          /* Cyan */
#define WHITE       "\033[37m"          /* White */
#define BOLDBLACK   "\033[1m\033[30m"   /* Bold Black */
#define BOLDRED     "\033[1m\033[31m"   /* Bold Red */
#define BOLDGREEN   "\033[1m\033[32m"   /* Bold Green */
#define BOLDYELLOW  "\033[1m\033[33m"   /* Bold Yellow */
#define BOLDBLUE    "\033[1m\033[34m"   /* Bold Blue */
#define BOLDMAGENTA "\033[1m\033[35m"   /* Bold Magenta */
#define BOLDCYAN    "\033[1m\033[36m"   /* Bold Cyan */
#define BOLDWHITE   "\033[1m\033[37m"   /* Bold White */

static int selected_color = 0;

static char *colors[] = {
    "-green", GREEN,
    "-black", BLACK,
    "-red", RED,
    "-yellow", YELLOW,
    "-blue", BLUE,
    "-magenta", MAGENTA,
    "-cyan", CYAN,
    "-white", WHITE,
    "-boldgreen", BOLDGREEN,
    "-boldblack", BOLDBLACK,
    "-boldred", BOLDRED,
    "-boldyellow", BOLDYELLOW,
    "-boldblue", BOLDBLUE,
    "-boldmagenta", BOLDMAGENTA,
    "-boldcyan", BOLDCYAN,
    "-boldwhite", BOLDWHITE,
    NULL
};

/*----------------------------------------------------------------------*/
int main(int argc, char *argv[]) {
    char buf[MAX_LINE];
    int has_re = 0;
    regex_t re;

    if (argc > 1) {
        if (argc > 2) {
            int idx = 0;
            while (colors[idx*2]) {
                if (!strcmp(colors[idx*2], argv[1])) {
                    selected_color = idx;
                    break;
                }
                idx++;
            }
            if (regcomp(&re, argv[2], REG_EXTENDED | REG_NEWLINE)) {
                printf("regcomp() failed!\n");
                return -1;
            }
        } else if (regcomp(&re, argv[1], REG_EXTENDED | REG_NEWLINE)) {
            printf("regcomp() failed!\n");
            return -1;
        }
        has_re = 1;
    } else {
        printf("Usage: %s [ -red | -blue | -cyan | -white | -black | "
               "-yellow | -magenta ] <regexp>\n", argv[0]);
        return -1;
    }

    while (fgets(buf, MAX_LINE, stdin) == buf) {
        char *bbuf = buf;
        while (1) {
            if (has_re) {
                regmatch_t match[10];
                if (regexec(&re, bbuf, re.re_nsub + 1, match, 0)) {
                    printf("%s", bbuf);
                    break;
                } else {
                    int i, idx;
                    for (i = idx = 0; i < 1; i++) {
                        if (match[0].rm_so < 0) {
                            break;
                        } else {
                            printf("%.*s",
                                   (int)(match[i].rm_so - idx),
                                   bbuf + idx);
                            printf("%s%.*s" RESET,
                                   colors[selected_color*2 + 1],
                                   (int)(match[i].rm_eo - match[i].rm_so),
                                   bbuf + (int)match[i].rm_so);
                            idx = match[i].rm_eo;
                            bbuf += idx;
                        }
                    }
                }
            }
            fflush(stdout);
        }
    }
    if (has_re) {
        regfree(&re);
    }
    return 0;
}
A: For searching source code, I use ack. It's got a lot of options that make sense for searching code (such as automatically ignoring SCM directories).
A: We use baretail; now if they added color to their baregrep, that would be nice.
A: This is an older question, but in case anyone is still looking, I recently created colorize, a tool which allows one to specify either fixed patterns or regular expressions to match with specific colors. It works out of the box with an intuitive syntax for specifying highlighting, and docopt as its only dependency.
colorize.py -f 'This is an interesting line=Blue' -f 'Different topic=Red' Input.log
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Simple toggle function with IE6 I'm facing a problem with IE6.
I took the toggle function from this website but when I apply it to my page I get the error (only in IE6):
Could not get the display property.
Invalid argument.
I can get the display property, but the error is thrown when I want to set it to the new value.
EDIT:
I know that several developers have faced this problem, so if it can help: the table-row display value is not supported by Internet Explorer 6!
In my case, even when I set the display property to '' I wasn't seeing anything, but that was because I had applied a class to my elements that hid them on load; the default display was therefore hidden, and when you set the display property to '', IE resets it to that default display.
A: Are you trying to set the display property to "table-row" by any chance? That is not supported by IE6.
A tip is to set display to an empty string. It makes the browser use the default value for the element.
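A minimal toggle along those lines (my own sketch; the element ID is hypothetical):
function toggleRow(id) {
    var el = document.getElementById(id);
    // Use '' rather than 'table-row' so IE6 falls back to the element's
    // default display value instead of throwing "Invalid argument".
    el.style.display = (el.style.display == 'none') ? '' : 'none';
}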
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Can you run a version 3 .Net binary on a version 2 CLR install? We're considering writing the next version of our project using .NET 3.0, but are wondering if we can take the hit of forcing end users to install the .NET Framework version 3.
A: If you want to ensure you are only using .NET 2.0 compatible functionality then you should only use .NET 2.0 assemblies. Then you know you're safe and sound.
A: This article from Jean-Baptiste Evain explains how you can use C# 3.0 and LINQ while targeting machines on which only the .NET 2.0 runtime is installed.
The idea is to use System.Core Mono implementation, which is licensed under the MIT/X11 license.
A: .NET 3.0 is a bad name for some new libraries: WPF, WCF, Workflow and InfoCard. The CLR is still version 2.0, ASP.NET is still version 2.0, and Windows Forms is still version 2.0.
In development, .NET 3.0 was called WinFX.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to programmatically unplug & replug an arbitrary USB device? I'm trying to fix a non-responsive USB device that's masquerading as a virtual COM port. Manual replugging works, but there may be up to 12 of these units. Is there an API command to do the programmatic equivalent of the unplug/replug cycle?
A: What about using Devcon.exe to "remove" and then "rescan"?
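For example (the hardware ID below is a placeholder; use devcon find * or Device Manager to discover the real one):
devcon remove "USB\VID_1234&PID_5678*"
devcon rescan
The remove deletes the device node, and rescan makes Windows re-enumerate it, which is roughly the programmatic unplug/replug cycle the question asks for.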
DR
A: You can use the C# Hardware Helper Lib and add the ResetDevice function.
public bool ResetDevice(IntPtr hDevInfo, IntPtr devInfoData)
{
    // Need to add
    //   public const int DICS_PROPCHANGE = ((0x00000003));
    // at the public class Native under //PARMS
    int szOfPcp;
    IntPtr ptrToPcp;
    int szDevInfoData;
    IntPtr ptrToDevInfoData;

    Native.SP_PROPCHANGE_PARAMS pcp = new Native.SP_PROPCHANGE_PARAMS();
    pcp.ClassInstallHeader.cbSize = Marshal.SizeOf(typeof(Native.SP_CLASSINSTALL_HEADER));
    pcp.ClassInstallHeader.InstallFunction = Native.DIF_PROPERTYCHANGE;
    pcp.StateChange = Native.DICS_PROPCHANGE; // for reset
    pcp.Scope = Native.DICS_FLAG_CONFIGSPECIFIC;
    pcp.HwProfile = 0;

    szOfPcp = Marshal.SizeOf(pcp);
    ptrToPcp = Marshal.AllocHGlobal(szOfPcp);
    Marshal.StructureToPtr(pcp, ptrToPcp, true);
    szDevInfoData = Marshal.SizeOf(devInfoData);
    ptrToDevInfoData = Marshal.AllocHGlobal(szDevInfoData);
    Marshal.StructureToPtr(devInfoData, ptrToDevInfoData, true);

    bool rslt1 = Native.SetupDiSetClassInstallParams(hDevInfo, ptrToDevInfoData, ptrToPcp, Marshal.SizeOf(typeof(Native.SP_PROPCHANGE_PARAMS)));
    bool rslt2 = Native.SetupDiCallClassInstaller(Native.DIF_PROPERTYCHANGE, hDevInfo, ptrToDevInfoData);
    if (rslt1 && rslt2)
    {
        return true;
    }
    return false;
}
A: Unfortunately, there isn't one that I know of. Physically unplugging the USB connection does specific electronic things with pullup resistors, such that the device knows it's unplugged. I haven't encountered a host that attempts to be able to simulate this condition without physical unplugging.
A: Thought: under Device Manager, you can right-click your computer icon (top of the device tree) and "scan for changes". I'm not 100% sure, but I think if you "eject" a USB device (software "unplug" equivalent), then Scan for Hardware Changes, it will show back up even though it never actually left the port.
If I'm right about that, you might be able to use the Microsoft.Win32.Shell class to emulate opening Control Panel --> Administrative Tools --> Device Manager and running the context-menu item. It's worth a shot, anyway.
A: As Greg Hewgill said, I don't think that it's possible.
Initiation of the whole usb startup is triggered by the usb slave (in your case your device). The usb host (the pc) can send a message to the device to tell it to shut down, but once it's done that it's up to the device to start back up again. The host can't force it to.
To make matters worse you'll quite possibly find that the usb device is detecting the plug being inserted (by detecting the usb voltage on the power lines) to start up. This is particularly true of bus powered devices.
It sounds like there are differences between your situation and the case of trying to unmount/remount usb drives. When the usb drive is unmounted there is no reason that it can't stay enumerated on the pc. You're not actually resetting the usb drive, just making its filesystem inactive.
A: I've looked at this for automated tests. The best solution we came up with relies on the ability of USB hubs to disconnect devices when they draw too much power. From a USB perspective, it appears the USB host may instruct a hub to do so. With 12 devices, you will have hubs, so I'd suggest investigating that path.
A: I had to do this for my car computer project a while back. The touchscreen drivers didn't like going into hibernate and needed to be replugged when the computer came back from hibernate. The way I ended up solving it was to use Devcon.exe like DigitalRacer suggested. The trick however, was that remove/rescan on the controller didn't work. I had to do the remove/rescan on a HUB upstream from the device (which subsequently disconnected all attached devices).
A: Here's some hands on guidance:
http://digital.ni.com/public.nsf/allkb/1D120A90884C25AF862573A700602459
This is more hardcore:
http://support.microsoft.com/kb/311272
I'd say that using devcon.exe may solve some problems, though not mine. Suppose that you build a box with arrays of USB ports, where the power line is interrupted by FETs controlled by an MCU. The MCU should talk something basic and reliable, like RS-232. There might be an Arduino board that simplifies the scary hardware work.
A: For windows 10 devices onward pnputil appears to be the best answer.
pnputil is installed in every build of windows.
Commands for restarting
pnputil /restart-device "device-instance-ID"
or
pnputil /disable-device "device-instance-ID"
pnputil /enable-device "device-instance-ID"
Find USB Device
With every USB device there is generally more than one device instance ID associated with it.
You need to get one instance ID first
*
*Open "Device Manager" window
*Go to the "Universal Serial Bus Controllers" drop down
*Right Click on one of the "USB Composite Devices" and select "Properties"
*Select the "Details" tab on the window
*Then on the property dropdown list select "Device Instance Path". This will show you a device instance ID.
*You'll need to plug and unplug your usb device, compare device instance IDs and by process of elimination determine which of the devices is the one you want.
Using this device instance ID you can now search for all device instance IDs by using the VID value
Device Instance ID Example
USB\VID_045E&PID_097D&MI_02\7&28580E27&0&0002
VID value for search
VID_045E
Powershell Command to search for all device instance IDs
Get-WmiObject Win32_PnPEntity | Where-Object {$_.PNPDeviceID -like "*VID_045E*"} | Select-Object Caption, PNPDeviceID
This should give a list of all device instance IDs associated with your USB device.
Selecting the correct device to restart isn't obvious, and a friendly warning: you can cause some issues by selecting the wrong ID. But if you figure out the correct ID then this should work for you.
A: If you have more than one of these on any particular host machine, you might save some time/frustration by plugging them into their own dedicated USB hub out from the machine - at least it's only one cable to unplug/plug to restart a couple of devices at a time.
You've probably thought of that, of course. :-)
A: The device itself may be able to do this (ie, perform a USB disconnect/reconnect sequence).
Have you contacted the device manufacturer, or if you are the manufacturer, the EE's that designed it?
I had to do this when I designed a USB embedded device - programming could be accomplished through USB, but the device had to be able to disconnect and reconnect at several points to complete the process.
Beyond that there's the brute force method of disabling the USB host device in device manager (I assume this can be done in software) and then re-enabling it.
If nothing else, Phidget has USB controlled relay boards which you can use to connect power or the USB lines themselves to hubs or individual devices.
-Adam
A: Programmatically unmounting a USB drive can be done, however, I don't know if remounting can be done via code.
A: In Eject USB disks using C# (The Code Project) look for this:
CM_Request_Device_Eject function
This is the SetupApi function that
ejects a device (any device that can
be ejected). It takes a device
instance handle (or devInst) as
input...
A: We used this to programmatically disconnect USB devices.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: ToolTipping a Drop Down list I have a drop down list in a GridView. The data inside the drop down list has variable length but the drop down list is of a fixed size. How can a tooltip be displayed over each item without selecting any item from the drop down list?
I have googled but come across samples where the tooltip is displayed over a currently selected item of drop down list. Can anybody in the SO community give me a hint?
A: I'm going to presume that this will eventually have to work in IE?
If so, it isn't pretty. IE6 doesn't support the "title" attribute on OPTION elements.
There is an ugly, but potentially usable workaround listed here on the MSDN Internet Explorer forum threads:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3570987&SiteID=1
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Catching a tab close event in web browser? Is there any way of knowing if the user closes a tab in a web browser? Specifically IE7, but also Firefox and others as well. I would like to be able to handle this situation from our ASP code if the current tab containing our web site closes.
A: onbeforeunload also gets called on the following events -
* Close the current browser window.
* Navigate to another location by entering a new address or selecting a Favorite.
* Click the Back, Forward, Refresh, or Home button.
* Click an anchor that refers the browser to another Web page.
* Invoke the anchor.click method.
* Invoke the document.write method.
* Invoke the document.open method.
* Invoke the document.close method.
* Invoke the window.close method.
* Invoke the window.open method, providing the possible value _self for the window name.
* Invoke the window.navigate or NavigateAndFind method.
* Invoke the location.replace method.
* Invoke the location.reload method.
* Specify a new value for the location.href property.
* Submit a form to the address specified in the ACTION attribute via the INPUT type=submit control, or invoke the form.submit method.
So, if you are trying to log out the user if they close the tab or browser window, you would end up logging them out every time they click a link or submit the page.
A: Attach an "onbeforeunload" event. It can execute code just before the browser/tab closes.
A: Does document.unload not do it for you?
A: If you need to know when the page is closed at the server side, your best bet is to ping the server periodically from the page (via XMLHttpRequest, for example). When pinging stops, the page is closed. This will also work if the browser crashed, was terminated or the computer was turned off.
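A minimal sketch of that ping approach (the /ping endpoint is hypothetical; the server just has to note the time of the last request per session and treat silence beyond a timeout as "page closed"):
// Ping the server every 30 seconds; when pings stop arriving,
// the server can assume the page was closed or the browser died.
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/ping", true);
    xhr.send(null);
}, 30000);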
A: As Eran Galperin suggested, use onbeforeunload. Particularly, return a string, and that string will be included in a confirmation dialog which will allow the user to choose whether or not to leave the page. Each browser includes some text before and after the string you return, so you should test in different browsers to see what a user would be presented with.
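For example, a sketch of returning a string from the handler:
window.onbeforeunload = function () {
    // The browser wraps this string in its own confirmation dialog text
    return "You have unsaved changes on this page.";
};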
A: I'm sure the OP is asking from the context of the web page itself, but for any Firefox-addon-developers who come across this question, you can use the TabClose event. https://developer.mozilla.org/en/Code_snippets/Tabbed_browser#Notification_when_a_tab_is_added_or_removed
A: Santosh is right, this event will be triggered on many more actions than just clicking the close button of your tab/browser. I've created a little hack to prevent the onbeforeunload action from being triggered by adding the following function on document ready.
$(function () {
window.onbeforeunload = OnBeforeUnload;
$(window).data('beforeunload', window.onbeforeunload);
$('body').delegate('a', 'hover', function (event) {
if (event.type === 'mouseenter' || event.type === "mouseover")
window.onbeforeunload = null;
else
window.onbeforeunload = $(window).data('beforeunload');
});
});
Also, before you trigger any of the events mentioned by Santosh, run this line if you do not want the event to be triggered:
window.onbeforeunload = null;
And here's the final code piece:
function OnBeforeUnload(oEvent) {
// return a string to show the warning message (not every browser will use this string in the dialog)
// or run the code you need when the user closes your page
return "Are you sure you want to close this window?";
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to sort columns in an ASP.NET GridView if using a custom DataSource? I can't get my GridView to enable a user to sort a column of data when I'm using a custom SqlDataSource.
I have a GridView in which the code in the ASP reference to it in the HTML is minimal:
<asp:GridView id="grid" runat="server" AutoGenerateColumns="False" AllowSorting="True">
</asp:GridView>
In the code-behind I attach a dynamically-created SqlDataSource (the columns it contains are not always the same so the SQL used to create it is constructed at runtime). For example:
I set up the columns...
BoundField column = new BoundField();
column.DataField = columnName;
column.HeaderText = "Heading";
column.SortExpression = columnName;
grid.Columns.Add(column);
the data source...
SqlDataSource dataSource = new SqlDataSource(
"System.Data.SqlClient",
connectionString,
generatedSelectCommand);
then the gridview...
grid.DataSource = dataSource;
grid.DataKeyNames = mylistOfKeys;
grid.DataBind();
At the moment nothing happens when a user clicks on a column heading when I'd expect it to sort the column data. Anyone any ideas what I'm missing?
If there's a nicer way of doing this that would be helpful too as this looks messy to me!
A: You could also just reassign the datasource.SelectCommand before the DataBind() call in the Sorting handler. Something like this:
protected void gvItems_Sorting(object sender, GridViewSortEventArgs e)
{
GridView gv = (GridView)sender;
SqlDataSource ds = (SqlDataSource)gv.DataSource;
ds.SelectCommand = ds.SelectCommand + " order by "
+ e.SortExpression + " " + GetSortDirection(e.SortDirection);
gvItems.DataSource = ds;
gvItems.DataBind();
}
string GetSortDirection(SortDirection sortDirection)
{
string sSortDir;
if (SortDirection.Ascending == sortDirection)
{
sSortDir = "asc";
}
else
{
sSortDir = "desc";
}
return sSortDir;
}
I hope this helps. Let me know if you need extra help to implement it.
Enjoy!
A: First you need to add an event:
<asp:GridView AllowSorting="True" OnSorting="gvName_Sorting" ...
Then that event looks like:
protected void gvName_Sorting( object sender, GridViewSortEventArgs e )
{
...
//rebind gridview
}
You basically have to get your data again.
You're right that it looks messy and there is a better way: ASP.Net MVC
Unfortunately that's a drastically different page model.
A: I'm not sure about this one, but if you use a standard SqlDataSource and you click on a field to sort according to that field, the SqlDataSource is populated again with the data and it is rebound to the grid. So the sorting does not happen on the client side, and it can also only be done when the select mode of the SqlDataSource is not DataReader.
When handling the sorting event, do you recreate the SqlDataSource and rebind it to the GridView? Can you put the sort field and direction into the generatedSelectCommand which you use? Or put it into the SortParameterName property of the SqlDataSource?
I'm absolutely sure that you have to rebind the SqlDataSource to the grid, and since you create it on the fly, you have to populate it again.
A: Better late than never?
Some addition for Keith's suggestion which is basically the right one.
Truth is, that you have to deal with sorting on gridView_Sorting event.
There is no need to DataBind() the GridView earlier, for example in Page_Load event. There you should only call the GridView.Sort() method instead of .DataBind(). Here is how it goes:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
If Not IsPostBack Then
Me.gridView.Sort(Request.QueryString("sortExpression"), Request.QueryString("sortDirection"))
End If
End Sub
Next let's have a look on gridView_Sorting event.
There you have to push the datasource to the right sorting. GridView itself does not handle that (in this case at least).
Protected Sub gridView_Sorting(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewSortEventArgs) Handles gridView.Sorting
If IsPostBack Then
e.Cancel = True
Dim sortDir As SortDirection = SortDirection.Ascending
If e.SortExpression = Me.Q_SortExpression And Me.Q_SortDirection = SortDirection.Ascending Then
sortDir = SortDirection.Descending
End If
RedirectMe(e.SortExpression, sortDir)
Else
Dim sortExpr As String = e.SortExpression + " " + IIf(e.SortDirection = SortDirection.Ascending, "ASC", "DESC")
Dim dv As System.Data.DataView = Me.dsrcView.Select(New DataSourceSelectArguments(sortExpr))
Me.gridView.DataSource = dv
Me.gridView.DataBind()
End If
End Sub
No need to code any sorting functionality in data source like passing sort parameters to stored procedure. All sorting takes place in the above pieces of code.
Moreover, it's good to have the gridView.EnableViewState switched to False which causes the page to be much lighter for the network traffic and for the browser as well. Can do that as the grid is entirely recreated whenever the page is post back.
Have a nice day!
Martin
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: SQL Compact select top 1 While porting an application from SQL 2005 to SQL Server Compact Edition, I found that I need to port this command:
SELECT TOP 1 Id FROM tblJob WHERE Holder_Id IS NULL
But SQL Server Compact Edition doesn't support the TOP keyword. How can I port this command?
A: SELECT TOP(1) Id
FROM tblJob
WHERE Holder_Id IS NULL
Need the brackets as far as I know.
reference: http://technet.microsoft.com/en-us/library/bb686896.aspx
Addition: likewise, this only works in version 3.5 onwards.
A: This is slightly orthogonal to your question.
SQL Server Compact Edition actually doesn't perform very well with SQL queries. You get much better performance by opening tables directly. In .NET, you do this by setting the command object's CommandText property to the table name, and the CommandType property to CommandType.TableDirect.
If you want to filter the results, you will need an index on the table on the column(s) you want to filter by. Specify the index to use by setting the IndexName property and use SetRange to set the filter.
You can then read as many or as few records as you like.
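A sketch of that TableDirect approach is below. The index name IX_Holder is an assumption; it must be a real index on tblJob covering the Holder_Id column:
using System.Data;
using System.Data.SqlServerCe;

using (SqlCeConnection conn = new SqlCeConnection(connectionString))
{
    conn.Open();
    SqlCeCommand cmd = conn.CreateCommand();
    cmd.CommandType = CommandType.TableDirect;
    cmd.CommandText = "tblJob";   // table name, not a SQL query
    cmd.IndexName = "IX_Holder";  // index on Holder_Id (hypothetical name)
    // Seek rows where Holder_Id is NULL
    cmd.SetRange(DbRangeOptions.Match, new object[] { DBNull.Value }, null);
    using (SqlCeDataReader reader = cmd.ExecuteResultSet(ResultSetOptions.None))
    {
        if (reader.Read())
        {
            int id = reader.GetInt32(0); // the "TOP 1" row; ordinal depends on your schema
        }
    }
}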
A: I've used the Fill method of SqlCeDataAdapter. You can do:
DbDataAdapter.Fill (DataSet, Int32, Int32, String) Adds or refreshes rows in a specified range in the DataSet to match those in the data source using the DataSet and DataTable names.
Supported by the .NET Compact Framework.
http://msdn.microsoft.com/en-ie/library/system.data.common.dbdataadapter.fill(v=VS.80).aspx
A: Looks like it can't be done in Compact. You have to read all the jobs, or use a data reader and just read the first one.
A: Well, I found the reason. Management Studio carries and uses its own version of SQL Server Compact. See more at http://en.wikipedia.org/wiki/SQL_Server_Compact.
SQL Server Management Studio 2005 can
read and modify CE 3.0 and 3.1
database files (with the latest
service pack), but the SQL Server
Management Studio 2008 from the
"Katmai" 2008 CTP release (or later)
is required to read version 3.5 files.
The RTM of SQL Server Management
Studio 2008 and Microsoft Visual
Studio Express 2008 SP1 can create,
modify and query CE 3.5 SP1 database
files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Landscape printing from HTML I have a HTML report, which needs to be printed landscape because of the many columns. It there a way to do this, without the user having to change the document settings?
And what are the options amongst browsers.
A: You might be able to use the CSS 2 @page rule which allows you to set the 'size' property to landscape.
A: You can also use the non-standard IE-only css attribute writing-mode
div.page {
writing-mode: tb-rl;
}
A: In your CSS you can set the @page property as shown below.
@media print{@page {size: landscape}}
The @page rule is part of the CSS 2.1 specification; however, this size property is not, as highlighted by the answer to the question Is @Page { size:landscape} obsolete?:
CSS 2.1 no longer specifies the size attribute. The current working
draft for CSS3 Paged Media module does specify it (but this is not
standard or accepted).
As stated the size option comes from the CSS 3 Draft Specification. In theory it can be set to both a page size and orientation although in my sample the size is omitted.
Support is very mixed, with a bug report having been filed against Firefox; most browsers do not support it.
It may seem to work in IE7, but this is because IE7 will remember the user's last selection of landscape or portrait in print preview (only until the browser is re-started).
This article does have some suggested workarounds using JavaScript or ActiveX that send keys to the user's browser, although they are not ideal and rely on changing the browser's security settings.
Alternately, you could rotate the content rather than the page orientation. This can be done by creating a style that includes the rotation rules below and applying it to the body, but this also has drawbacks, creating many alignment and layout issues.
<style type="text/css" media="print">
.page
{
-webkit-transform: rotate(-90deg);
-moz-transform:rotate(-90deg);
filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=3);
}
</style>
The final alternative I have found is to create a landscape version in a PDF. You can point to it so that when the user selects print, it prints the PDF. However, I could not get this auto-print to work in IE7.
<link media="print" rel="Alternate" href="print.pdf">
In conclusion, in some browsers it is relatively easy using the @page size option; however, in many browsers there is no sure way, and it will depend on your content and environment.
This may be why Google Documents creates a PDF when print is selected and then allows the user to open and print that.
A: My solution:
<style type="text/css" media="print">
@page {
size: landscape;
}
body {
writing-mode: tb-rl;
}
</style>
*With media="print" this will apply only on print.
*This works in IE, Firefox and Chrome
A: I created a blank MS Word document with a Landscape setting and then opened it in Notepad. I copied and pasted the following into my HTML page:
<style type="text/css" media="print">
@page Section1
{size:11 8.5in;
margin:.5in 13.6pt 0in 13.6pt;
mso-header-margin:.5in;
mso-footer-margin:.5in;
mso-paper-source:4;}
div.Section1
{page:Section1;}
</style>
<div class="Section1"> put text / images / other stuff </div>
The print preview shows the pages in a landscape size. This seems to be working fine on IE and Chrome, not tested on FF.
A: I tried Denis's answer and hit some problems (portrait pages didn't print properly when they came after landscape pages), so here is my solution:
body {
margin: 0;
background: #CCCCCC;
}
div.page {
margin: 10px auto;
border: solid 1px black;
display: block;
page-break-after: always;
width: 209mm;
height: 296mm;
overflow: hidden;
background: white;
}
div.landscape-parent {
width: 296mm;
height: 209mm;
}
div.landscape {
width: 296mm;
height: 209mm;
}
div.content {
padding: 10mm;
}
body,
div,
td {
font-size: 13px;
font-family: Verdana;
}
@media print {
body {
background: none;
}
div.page {
width: 209mm;
height: 296mm;
}
div.landscape {
transform: rotate(270deg) translate(-296mm, 0);
transform-origin: 0 0;
}
div.portrait,
div.landscape,
div.page {
margin: 0;
padding: 0;
border: none;
background: none;
}
}
<div class="page">
<div class="content">
First page in Portrait mode
</div>
</div>
<div class="page landscape-parent">
<div class="landscape">
<div class="content">
Second page in Landscape mode (correctly shows horizontally in browser and prints rotated in printer)
</div>
</div>
</div>
<div class="page">
<div class="content">
Third page in Portrait mode
</div>
</div>
A: The size property is what you're after, as mentioned. To set both the orientation and size of your page when printing, you could use the following:
/* ISO Paper Size */
@page {
size: A4 landscape;
}
/* Size in mm */
@page {
size: 100mm 200mm landscape;
}
/* Size in inches */
@page {
size: 4in 6in landscape;
}
Here's a link to the @page documentation.
A: Here's what I came up with - add a negative rotation to the <html> element and a positive rotation of equal abs value to the <body>. That saved having to add a ton of CSS to style the body, and it worked like a charm:
html {
transform: rotate(-90deg);
}
body {
transform: rotate(90deg);
}
A: It's not enough just to rotate the page content. Here is CSS that works in most browsers (Chrome, Firefox, IE9+).
First set body margin to 0, because otherwise page margins will be larger than those you set in the print dialog. Also set background color to visualize pages.
body {
margin: 0;
background: #CCCCCC;
}
margin, border and background are required to visualize pages.
padding must be set to the required print margin. In the print dialog you must set the same margins (10mm in this example).
div.portrait, div.landscape {
margin: 10px auto;
padding: 10mm;
border: solid 1px black;
overflow: hidden;
page-break-after: always;
background: white;
}
The size of A4 page is 210mm x 297mm. You need to subtract print margins from the size. And set the size of page's content:
div.portrait {
width: 190mm;
height: 276mm;
}
div.landscape {
width: 276mm;
height: 190mm;
}
I use 276mm instead of 277mm, because different browsers scale pages a little bit differently. So some of them would print 277mm-high content on two pages, with the second page empty. It's safer to use 276mm.
We don't need any margin, border, padding, background on the printed page, so remove them:
@media print {
body {
background: none;
-ms-zoom: 1.665;
}
div.portrait, div.landscape {
margin: 0;
padding: 0;
border: none;
background: none;
}
div.landscape {
transform: rotate(270deg) translate(-276mm, 0);
transform-origin: 0 0;
}
}
Note that the origin of transformation is 0 0! Also the content of landscape pages must be moved 276mm down!
Also, if you have a mix of portrait and landscape pages, IE will zoom out the pages. We fix this by setting -ms-zoom to 1.665. If you set it to 1.6666 or something like that, the right border of the page content may sometimes be cropped.
If you need to support IE8 or other old browsers you can use -webkit-transform, -moz-transform, filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=3). But for modern enough browsers it's not required.
Here is a test page:
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<style>
...Copy all styles here...
</style>
</head>
<body>
<div class="portrait">A portrait page</div>
<div class="landscape">A landscape page</div>
</body>
</html>
A: Try to add this your CSS:
@page {
size: landscape;
}
A: Quoted from CSS-Discuss Wiki
The @page rule has been cut down in
scope from CSS2 to CSS2.1. The full
CSS2 @page rule was reportedly
implemented only in Opera (and buggily
even then). My own testing shows that
IE and Firefox don't support @page at
all. According to the now-obsolescent
CSS2 spec section 13.2.2 it is
possible to override the user's
setting of orientation and (for
example) force printing in Landscape
but the relevant "size" property has
been dropped from CSS2.1, consistent
with the fact that no current browser
supports it. It has been reinstated in
the CSS3 Paged Media module but note
that this is only a Working Draft (as
at July 2009).
Conclusion: forget
about @page for the present. If you
feel your document needs to be printed
in Landscape orientation, ask yourself
if you can instead make your design
more fluid. If you really can't
(perhaps because the document contains
data tables with many columns, for
example), you will need to advise the
user to set the orientation to
Landscape and perhaps outline how to
do it in the most common browsers. Of
course, some browsers have a print
fit-to-width (shrink-to-fit) feature
(e.g. Opera, Firefox, IE7) but it's
inadvisable to rely on users having
this facility or having it switched
on.
A: I tried to solve this problem once, but all my research led me towards ActiveX controls/plug-ins. There is no trick that the browsers (3 years ago anyway) permitted to change any print settings (number of copies, paper size).
I put my efforts into warning the user carefully that they needed to select "landscape" when the browsers print dialog appeared. I also created a "print preview" page, which worked much better than IE6's did! Our application had very wide tables of data in some reports, and the print preview made it clear to the users when the table would spill off the right-edge of the paper (since IE6 couldnt cope with printing on 2 sheets either).
And yes, people are still using IE6 even now.
A: <style type="text/css" media="print">
.landscape {
width: 100%;
height: 100%;
margin: 0% 0% 0% 0%; filter: progid:DXImageTransform.Microsoft.BasicImage(Rotation=1);
}
</style>
If you want this style to be applied to a table then create one div tag with this style class and add the table tag within this div tag and close the div tag at the end.
This table will only print in landscape and all other pages will print in portrait mode only. But the problem is if the table size is more than the page width then we may loose some of the rows and sometimes headers also are missed. Be careful.
Have a good day.
Thank you,
Naveen Mettapally.
A: -webkit-transform: rotate(-90deg); -moz-transform:rotate(-90deg);
filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=3);
not working in Firefox 16.0.2 but it is working in Chrome
A: This also worked for me:
@media print and (orientation:landscape) { … }
A: The problem I faced is probably the same one you have. Everyone here is using CSS to set the orientation statically, but I had to look for a dynamic solution that would change it based on the active element without reloading the page.
I created 2 files, portrait.css and landscape.css.
portrait.css is blank, but landscape.css has one line.
@media print{@page {size: landscape}}
in my primary file, I added this line of html to specify portrait.css as default.
<link rel="stylesheet" id="PRINTLAYOUT" href="portrait.css" type="text/css" />
Now, to switch printing modes you only have to change the href in the element.
$("#PRINTLAYOUT").attr("href","landscape.css")
// OR
document.getElementById("PRINTLAYOUT").href = "landscape.css" // I think...
This worked great for me, and I hope it helps someone else doing things the hard way like me. As a note, $ represents jQuery. If you are not using this yet, I highly recommend you start now.
A: If you are using React and libraries like MUI, using plain CSS in your React app is not a good practice. The better approach will be to use a style component called GlobalStyles, which we can import from Material UI.
The code will look like this,
import { GlobalStyles } from '@mui/material';
const printStyle = {
['@media print']: {
['@page']: {
size: 'landscape',
margin: '2px',
},
},
};
You might not need to use @page inside the @media print because @page is only for printing. Documentation
The margin will eliminate the URLs, the browser generates while printing.
We can use the GlobalStyles in our App container. Like this
const App: React.FC = () => (
<>
<GlobalStyles styles={printStyle} />
<AppView />
</>
);
It will apply the above CSS whenever we call windows.print().
If you are using other libraries besides MUI, there should be some components or plugins that you can use to apply the CSS globally.
A: You can try the following:
@page {
size: auto
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "295"
}
|
Q: How to determine from within Java which .NET framework is installed From within my Java program I want to determine which .NET Framework is installed on the system. What is the best (and easiest) way to do this?
Answer Thanks scubabbl! It worked to check the directory System.getenv( "WINDIR" ) + "\\Microsoft.NET\\Framework" for its directories starting with the letter "v".
A: From what I understand, the actual file structure in c:\windows\Microsoft.Net\Framework has folders with versions of .Net installed. On my computer, I have folders up to v3.5, or
c:\windows\Microsoft.Net\Framework\v3.5.
There are lots of issues with this, including security issues though.
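A sketch of that directory check from Java (this assumes the conventional install location under %WINDIR%, as in the accepted approach above):
import java.io.File;

public class DotNetVersions {
    public static void main(String[] args) {
        File framework = new File(System.getenv("WINDIR"), "Microsoft.NET\\Framework");
        // Version folders start with "v", e.g. v2.0.50727, v3.5
        File[] versions = framework.listFiles(f -> f.isDirectory() && f.getName().startsWith("v"));
        if (versions != null) {
            for (File v : versions) {
                System.out.println(v.getName());
            }
        }
    }
}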
The second, and probably better answer would be to check the windows registry.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP
The Version key will have the value you are looking for.
Edit: Stack Overflow question regarding reading the registry with Java:
read/write to Windows Registry using Java
This library http://www.trustice.com/java/jnireg/ will allow you to read the registry.
A: Checking the directory structure is not the best way to do this. Take a look at this thread for the full details on the registry keys you need to evaluate.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's the best way to detect web application attacks? What is the best way to survey and detect bad user behavior or attacks like denial of service or exploits on my web app?
I know server statistics (like Awstats) are very useful for that kind of purpose, especially to see 3XX, 4XX and 5XX errors (here's an Awstats example page), which are often bots or ill-intentioned users that try well-known bad or malformed URLs.
Are there other (and better) ways to analyze and detect that kind of attack attempt?
Note: I'm speaking about URL-based attacks, not attacks on server components (like database or TCP/IP).
A: Log everything. Then examine the logs by hand, and find things that are uninteresting and write a parser that discards those log entries. Once you've done that, rinse and repeat until you're left with just the interesting things. Now that you have only interesting log entries to read, decide which ones are dangerous and which ones are harmless but annoying, and fix as appropriate.
A: If you have the budget, go with a Web Application Firewall (WAF). These are built specifically for recognizing and blocking application-layer attacks. There are also some cheap WAFs, even an open-source one or two.
Note however that you should still practice secure coding etc; a WAF is great for defense in depth, and temporary virtual patching.
A: I usually write my own log analyzer that tries to spot the events that typically happen when the navigation is done by something NOT human. Like:
Direct access to pages with URL or parameters unknown
Feedback forms loaded, compiled and posted in less than, say, 10 seconds
Wrong referrer sequences
HTML or "critical" character sequences in posted fields
And so on...
A: First you have to say what is or is not a potential exploit, sometimes a url may be a valid request and sometimes it may be a XSS attack. A lot of traffic may be a DDoS or it may be a result of being mentioned on a slashdot article.
Next, you can view logs for various types of attack - such as DDoS, which you'll want to check using IP tools (as a lot of DDoS attacks are made on non-web ports, such as SYN floods).
Then you want to install mod_security and set up some rules for it (you can find a lot of pre-defined rulesets on the web). This reads the request and parses it for common or known attacks (such as urls that contain sql or html type text).
A: This is more for the network as a whole, but SATAN is very good:
http://www.porcupine.org/satan/
SATAN is a tool to help systems administrators. It recognizes several common networking-related security problems, and reports the problems without actually exploiting them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to figure out the current Biztalk Host Process I would like to know at runtime in which of several possible host processes my current code is executing. The host processes have configured names at the Biztalk management level, but I need to know which process I'm in from inside the code.
I didn't find any supported way to do this and I'm even stuck with the search for an unsupported method :-)
Edit: Thanks to tomasr for the answer. I would need that mainly for logging/tracing purposes. Not only to display the host the code is running in but also to determine the appropriate trace level.
That also means, I need this deep down on library level and it has to be fast. I can't go and get the call stack for example to find out which top-level-component (orchestration, pipeline, whatever) the code is running in.
On the other hand I could figure it out just once in a singleton constructor. That would be called once per AppDomain and thus could take a little while to look things up. But I probably don't have a very meaningful call stack there, so we are back on square one :-/
Edit2: The Biztalk Management DB must contain the information I need. It knows which hosts are started on which server and (probably) the process id's of these host processes. If someone has a pointer where I could start looking there, that would help me, too.
A: As far as I know, there isn't any "simple" way of doing it. One (somewhat backwards) way of getting that info would be to use the ExplorerOM API to query the management configuration data and figure out where you're executing and what host/handler you're running in.
For example, if you're doing this from a PipelineComponent, you could look for the port name in the message context and then look it up using ExplorerOM. Then, use that to query the Handler associated with the adapter and from there get the host name. Would be something similar for the orchestration, though you'd look for the full orchestration name.
Out of curiosity, why do you need this information?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SQL Reporting services - Out of Memory Exception I have some 10 lakh (one million) records in a single SQL table. I need to load these records into my report, and I need to know whether this will load. When I tried loading them into the report, it shows an out of memory exception.
A: Reporting Services (and Cognos, Business Objects, and other BI reporting suites) generally have problems rendering reports that have hundreds of thousands of records or millions of records in the OUTPUT. Most of these systems don't have much of a problem aggregating the data into tens of thousands of records, but once you start going into the hundreds of thousands or millions, you will run into memory errors.
My recommendation is to NOT use Reporting Services for reports that are hundreds of thousands of rows. No person is going to read all the lines in the report. Heck, most of the BI suites won't even output the report if you try to render to Excel due to the 65,536 row limitation. I would recommend using SSIS for large raw data dumps, Analysis Services cubes if you want to allow the user to do exploratory ad hoc slice and dice analysis in Excel, or find ways to break it into smaller, more relevant data that can be consumed by a human -- meaning aggregated or filtered to a few hundred or thousand rows.
If you MUST use reporting services and you want to use it as a tool to get the data into Excel, then you could try rendering to CSV via a subscription. Again, I would recommend just building a SSIS package that does this instead since you won't have memory issues outputting multi-million row CSV files. But if you MUST use reporting services as the output tool, then minimize the memory cost by going with the least memory intensive rendering method.
A: This is impossible to answer unless you expand your question. What language are you using? Which report generating framwork? How does the SQL query look like?
Edit: Ah, ok, Microsoft SQL Reporting Services. Well, it should easily handle queries on tables with millions of tuples, I'm sure. It all depends on how you have structured your query, so until you give us that we can't help you.
A: Are you trying to display tens of thousands of records? What user would ever read that? Have you tried scheduling and emailing the report?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there any way to create a stand-alone ButtonSpec in Krypton Toolkit? I need to create a button that has the same style as ButtonSpec with its type set to Context, is there any way to create a ButtonSpec that isn't directly attached to another control, or have I missed a simple style option on another control?
A: You cannot get a standalone ButtonSpec but you do not need one. Create a KryptonButton and then set the ButtonStyle to be ButtonSpec and it will draw in the same way as a ButtonSpec that is present in other controls. You could use a KryptonDropButton if you need it to show a KryptonContextMenu when pressed.
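A minimal sketch of that suggestion (assuming the ComponentFactory.Krypton.Toolkit assembly is referenced):
using ComponentFactory.Krypton.Toolkit;

// A standalone button drawn in the ButtonSpec style
KryptonButton button = new KryptonButton();
button.Text = "Context";
button.ButtonStyle = ButtonStyle.ButtonSpec;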
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to convert a Unicode character to its ASCII equivalent Here's the problem:
In C# I'm getting information from a legacy ACCESS database. .NET converts the content of the database (in the case of this problem a string) to Unicode before handing the content to me.
How do I convert this Unicode string back to its ASCII equivalent?
Edit
Unicode char 710 is indeed MODIFIER LETTER CIRCUMFLEX ACCENT. Here's the problem a bit more precise:
-> (Extended) ASCII character ê (Extended ASCII 136) was inserted in the database.
-> Either Access or the reading component in .NET converted this to U+02C6 U+0065
(MODIFIER LETTER CIRCUMFLEX ACCENT + LATIN SMALL LETTER E)
-> I need the (Extended) ASCII character 136 back.
Here's what I've tried (I see now why this did not work...):
string myInput = Convert.ToString(Convert.ToChar(710));
byte[] asBytes = Encoding.ASCII.GetBytes(myInput);
But this does not result in 94 but a byte with value 63...
Here's a new try but it still does not work:
byte[] bytes = Encoding.ASCII.GetBytes("ê");
Solution
Thanks to both csgero and bzlm for pointing in the right direction I solved the problem here.
A: You cannot use the default ASCII encoding (Encoding.ASCII) here, but must create the encoding with the appropriate code page using Encoding.GetEncoding(...). You might try to use code page 1252, which is a superset of ISO 8859-1.
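For example, a sketch of that suggestion (note that code page 1252 maps 'ê' to byte 0xEA rather than the CP437 value 136 discussed below):
using System.Text;

// Encode using Windows-1252 instead of plain ASCII
byte[] bytes = Encoding.GetEncoding(1252).GetBytes("ê"); // yields 0xEA (234)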
A: ASCII does not define ê; the number 136 comes from the number for the circumflex in 8-bit encodings such as Windows-1252.
Can you verify that a small e with a circumflex (ê) is actually what is supposed to be stored in the Access database in this case? Perhaps U+02C6 U+0065 is the result of a conversion error, where the input is actually an e followed by a circumflex, or something else entirely. Perhaps your Access database has corrupt data in the sense that the designated encoding does not match the contents, in which case the .NET client might incorrectly parse the data (using the wrong decoder).
If this error is indeed introduced during the reading from the database, perhaps pasting some code or configuration settings might help.
In Code page 437, character number 136 is an e with a circumflex.
A: Okay, let's elaborate. Both csgero and bzlm pointed in the right direction.
Because of bzlm's reply I looked up the Windows-1252 page on wiki and found that it's called a code page. The wikipedia article for Code page stated the following:
No formal standard existed for these ‘extended character sets’; IBM merely referred to the variants as code pages, as it had always done for variants of EBCDIC encodings.
This led me to codepage 437:
In ASCII-compatible code pages, the lower 128 characters maintained their standard US-ASCII values, and different pages (or sets of characters) could be made available in the upper 128 characters. DOS computers built for the North American market, for example, used code page 437, which included accented characters needed for French, German, and a few other European languages, as well as some graphical line-drawing characters.
So, codepage 437 was the codepage I was calling 'extended ASCII', it had the ê as character 136 so I looked up some other chars as well and they seem right.
csgero came with the Encoding.GetEncoding() hint, I used it to create the following statement which solves my problem:
byte[] bytes = Encoding.GetEncoding(437).GetBytes("ê");
A: Hmm … I'm not sure which character you mean. The caret (“^”, CIRCUMFLEX ACCENT) has the same code in ASCII and Unicode (U+005E).
/EDIT: Damn, my fault. 710 (U+02C6) is actually the MODIFIER LETTER CIRCUMFLEX ACCENT. Unfortunately, this character isn't part of ASCII at all. It might look like the normal caret but it's a different character. Simple conversion won't help here. I'm not sure if .NET supports mapping of similar characters when converting from Unicode. Worth investigating, though.
A: The value 63 is the question mark, AKA "I am not able to display this character in ASCII".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Javascript: How to reload entire frameset onload() event including frame where Javascript is called Is there a way to reload an entire frameset using Javascript onload() event?
function logout() {
/* ... */
// reload entire frame
top.location.reload();
}
<body onload="logout()">
This causes all frames to reload, but the URL of the frame where this was called didn't change back to the URL specified in the frameset.
A: As I understood it, you want to reload each frame in a frameset using the original URL as stated in <frame src="...">.
This little function can do that (put it into the document holding the frameset):
this.reloadChildFrames = function()
{
var allFrames = document.getElementsByTagName("frame");
for (var i = 0; i < allFrames.length; i++)
{
var f = allFrames[i];
f.contentDocument.location = f.src;
}
}
You are then able to call that function from within any child frame:
top.reloadChildFrames()
Of course, this can only work when all frames come from the same origin.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: merge custom skin & custom class into SWC If I have an *.as file that is my custom component, a SWC class that contains the custom components skins and a css file that tells the custom class how it should look (references the SWC skin file), how do I set up a library project (using flexbuilder 3) to generate a single SWC file that will contain everything?
A: sorry to answer my own question, but I found the answer to be that I can ignore the CSS file.
Make sure that the SWC with the skins is located in the library path of your library project, then just reference it by using the embed metadata tag.
e.g.
[Embed(skinClass="My_Slider_trackSkin")]
private var trackSkin : Class;
then just use set style and then when you load the component in another project it will default to the correct skin.
this.setStyle('trackSkin', trackSkin);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Get address of current page in Internet Explorer from toolbar I'm trying to wrap my head around creating a toolbar (a tool band in a rebar) in MFC for Internet Explorer using COM.
Is it possible to get the address of the currently viewed page (i.e., https://stackoverflow.com/questions/ask in my case :-) ) from the toolbar?
If so, what should I look in to?
Thanks!
A: You can use the IWebBrowser2::get_LocationURL method.
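A minimal sketch, assuming m_spWebBrowser is a CComPtr<IWebBrowser2> that the tool band obtained in its IObjectWithSite::SetSite implementation:
CComBSTR url;
if (SUCCEEDED(m_spWebBrowser->get_LocationURL(&url)) && url.Length() > 0)
{
    // url now holds the address of the page currently shown in the browser
    MessageBoxW(NULL, url, L"Current URL", MB_OK);
}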
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using own exceptions in WSE, not only SoapException Is it possible to send my own custom exceptions over SOAP to a client using http.sys?
A: To the best of my knowledge unfortunately the answer is no. You cannot build your own custom exceptions on the server side and expect to use them on the client side through WSE. I can't give much technical background as to why (as in why this is not allowed by WSE), but I am sure about my answer because I tested this out.
You can use the approach described in the provided link to return a custom exception that inherits from a System.Web.Services.Protocols.SoapException, however you must capture the exception on the client side as a SoapException since you will not be able to capture it as the custom exception type: http://msdn.microsoft.com/en-us/library/ms229064.aspx
To recreate the test for your own confirmation do the following:
*Create a test exception class, call it whatever you'd like and make sure it follows the pattern described in the link I provided above (there are code samples provided).
*Create a web method that explicitly returns the test exception like so:
'This is in VB.Net
<WebMethod()> _
Public Function ThrowTestSoapException() As TestSoapException
Return New TestSoapException()
End Function
*Try to regenerate your client's WSE Library (using WseWsdl3.exe) and you should receive an error message like this one: "Error: Server unavailable, please try later"
That is as far as I could get when trying to create my own transferable Custom Exceptions. Again the only thing I was able to do was return a Custom Exception that inherited from the SoapException class, and caught it on the client side as a SoapException. That is the same method described in the link that "CheGueVerra" pointed out above.
In response to John Saunders's comment above: Yes, if possible move over to WCF, WSE is indeed obsolete. Since this is work related for me and for others asking these questions, making a shift from WSE to WCF would require managerial approval - so some of us cannot make those changes easily - even if we desperately want to.
A: Yes, you can throw your own exceptions. Any uncaught exception that does not derive from SoapException will be bottled up by the .NET framework into a SoapException. You can derive from SoapException if you want to control how certain parts of the SoapException are formed (for instance, the fault and detail portions).
A: Like the ASMX services it is based on, WSE has little support for SOAP Faults. You can return them (SoapException with Detail property set), but they won't appear in the WSDL. If received by a WSE client, they will appear as SoapException, not as a custom exception type.
WCF does have full support for SOAP faults, both on the client and the server side.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to do a SQL NOT NULL with a DateTime? How does one handle a DateTime with a NOT NULL?
I want to do something like this:
SELECT * FROM someTable WHERE thisDateTime IS NOT NULL
But how?
A: Erm, it does work; I've just tested it:
/****** Object: Table [dbo].[DateTest] Script Date: 09/26/2008 10:44:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[DateTest](
[Date1] [datetime] NULL,
[Date2] [datetime] NOT NULL
) ON [PRIMARY]
GO
Insert into DateTest (Date1,Date2) VALUES (NULL,'1-Jan-2008')
Insert into DateTest (Date1,Date2) VALUES ('1-Jan-2008','1-Jan-2008')
Go
SELECT * FROM DateTest WHERE Date1 is not NULL
GO
SELECT * FROM DateTest WHERE Date2 is not NULL
A: Just to rule out a possibility - it doesn't appear to have anything to do with the ANSI_NULLS option, because that controls comparing to NULL with the = and <> operators. IS [NOT] NULL works whether ANSI_NULLS is ON or OFF.
I've also tried this against SQL Server 2005 with isql, because ANSI_NULLS defaults to OFF when using DB-Library.
A: I faced this problem where the following query doesn't work as expected:
select 1 where getdate()<>null
we expect it to show 1 because getdate() doesn't return null.
I guess it has something to do with SQL failing to cast null as datetime and skipping the row!
Of course we know we should use the IS or IS NOT keywords to compare a value with null, but when comparing two parameters it gets hard to handle the null situation.
as a solution you can create your own compare function like the following:
CREATE FUNCTION [dbo].[fnCompareDates]
(
@DateTime1 datetime,
@DateTime2 datetime
)
RETURNS bit
AS
BEGIN
if (@DateTime1 is null and @DateTime2 is null) return 1;
if (@DateTime1 = @DateTime2) return 1;
return 0
END
and re writing the query like:
select 1 where dbo.fnCompareDates(getdate(),null)=0
A: SELECT * FROM Table where codtable not in (Select codtable from Table where fecha is null)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Generating a report by date range in rails How would you go about producing reports by user selected date ranges in a rails app? What are the best date range pickers?
Edit in response to Patrick: I am looking for a bit of both widget and ActiveRecord advice, but what I am really curious about is how to restfully display a date-ranged list based on user-selected dates.
A: Are we asking an interface question here (i.e. you want a widget) or an ActiveRecord question?
Date Picking Widgets
1) Default Rails Solution: See date_select documentation here.
2) Use a plugin : Why write code? I personally like the CalendarDateSelect plugin, using a pair of the suckers when I need a range.
3) Adapt a Javascript widget to Rails: It is almost trivial to integrate something like the Yahoo UI library (YUI) Calendar, which is all Javascript, to Rails. From the perspective of Rails its just another way to populate the params[:start_date] and params[:end_date]. YUI Calendar has native support for ranges.
Getting the data from the Widgets
1) Default Rails Solution See date_select documentation here.
#an application helper method you'll find helpful
#credit to http://blog.zerosum.org/2007/5/9/deconstructing-date_select
# Reconstruct a date object from date_select helper form params
def build_date_from_params(field_name, params)
Date.new(params["#{field_name.to_s}(1i)"].to_i,
params["#{field_name.to_s}(2i)"].to_i,
params["#{field_name.to_s}(3i)"].to_i)
end
#goes into view
<%= date_select "report", "start_date", ... %>
<%= date_select "report", "end_date", ... %>
#goes into controller -- add your own error handling/defaults, please!
report_start_date = build_date_from_params("start_date", params[:report])
report_end_date = build_date_from_params("end_date", params[:report])
2) CalendarDateSelect: Rather similar to the above, just with sexier visible UI.
3) Adapt a Javascript widget: Typically this means that some form element will have the date input as a string. Great news for you, since Date.parse is some serious magic. The params[:some_form_element_name] will be initialized by Rails for you.
#goes in controller. Please handle errors yourself -- Javascript != trusted input.
report_start_date = Date.parse(params[:report_start_date])
Writing the call to ActiveRecord
Easy as pie.
#initialize start_date and end_date up here, by pulling from params probably
@models = SomeModel.find(:all, :conditions => ['date >= ? and date <= ?',
start_date, end_date])
#do something with models
A: It's not an unRESTful practice to have URL parameters control the range of selected records. In your index action, you can do what Patrick suggested and have this:
#initialize start_date and end_date up here, by pulling from params probably
@models = SomeModel.find(:all, :conditions => ['date >= ? and date <= ?', params[:start_date], params[:end_date]])
Then in your index view, create a form that tacks on ?start_date=2008-01-01&end_date=2008-12-31 to the URL. Remember that it's user-supplied input, so be careful with it. If you put it back on the screen in your index action, be sure to do it like this:
Showing records starting on
<%= h start_date %>
and ending on
<%= h end_date %>
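A sketch of such a form for the index view (the reports_path route name is hypothetical; using GET keeps the date range in the URL, so the filtered list stays RESTful and bookmarkable):
<% form_tag reports_path, :method => :get do %>
  Start: <%= text_field_tag :start_date, params[:start_date] %>
  End: <%= text_field_tag :end_date, params[:end_date] %>
  <%= submit_tag "Show report" %>
<% end %>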
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Iterate all files in a directory using a 'for' loop How can I iterate over each file in a directory using a for loop?
And how could I tell if a certain entry is a directory or if it's just a file?
A: Here's my go with comments in the code.
I'm just brushing up my batch skills so forgive any blatant errors.
I tried to write an all in one solution as best I can with a little modification where the user requires it.
Some important notes: Just change the variable recursive to FALSE if you only want the root directory's files and folders processed. Otherwise, it goes through all folders and files.
C&C most welcome...
@echo off
title %~nx0
chcp 65001 >NUL
set "dir=c:\users\%username%\desktop"
::
:: Recursive Loop routine - First Written by Ste on - 2020.01.24 - Rev 1
::
setlocal EnableDelayedExpansion
rem THIS IS A RECURSIVE SOLUTION [ALBEIT IF YOU CHANGE THE RECURSIVE TO FALSE, NO]
rem By removing the /s switch from the first loop if you want to loop through
rem the base folder only.
set recursive=TRUE
if %recursive% equ TRUE ( set recursive=/s ) else ( set recursive= )
endlocal & set recursive=%recursive%
cd /d %dir%
echo Directory %cd%
for %%F in ("*") do (echo → %%F) %= Loop through the current directory. =%
for /f "delims==" %%D in ('dir "%dir%" /ad /b %recursive%') do ( %= Loop through the sub-directories only if the recursive variable is TRUE. =%
echo Directory %%D
echo %recursive% | find "/s" >NUL 2>NUL && (
pushd %%D
cd /d %%D
for /f "delims==" %%F in ('dir "*" /b') do ( %= Then loop through each pushd' folder and work on the files and folders =%
echo %%~aF | find /v "d" >NUL 2>NUL && ( %= This will weed out the directories by checking their attributes for the lack of 'd' with the /v switch therefore you can now work on the files only. =%
rem You can do stuff to your files here.
rem Below are some examples of the info you can get by expanding the %%F variable.
rem Uncomment one at a time to see the results.
echo → %%~F &rem expands %%F removing any surrounding quotes (")
rem echo → %%~dF &rem expands %%F to a drive letter only
rem echo → %%~fF &rem expands %%F to a fully qualified path name
rem echo → %%~pF &rem expands %%A to a path only
rem echo → %%~nF &rem expands %%F to a file name only
rem echo → %%~xF &rem expands %%F to a file extension only
rem echo → %%~sF &rem expanded path contains short names only
rem echo → %%~aF &rem expands %%F to file attributes of file
rem echo → %%~tF &rem expands %%F to date/time of file
rem echo → %%~zF &rem expands %%F to size of file
rem echo → %%~dpF &rem expands %%F to a drive letter and path only
rem echo → %%~nxF &rem expands %%F to a file name and extension only
rem echo → %%~fsF &rem expands %%F to a full path name with short names only
rem echo → %%~dp$dir:F &rem searches the directories listed in the 'dir' environment variable and expands %%F to the fully qualified name of the first one found. If the environment variable name is not defined or the file is not found by the search, then this modifier expands to the empty string
rem echo → %%~ftzaF &rem expands %%F to a DIR like output line
)
)
popd
)
)
echo/ & pause & cls
A: To iterate through all files and folders you can use
for /F "delims=" %%a in ('dir /b /s') do echo %%a
To iterate through all folders only not with files, then you can use
for /F "delims=" %%a in ('dir /a:d /b /s') do echo %%a
Where /s will give all results throughout the directory tree in unlimited depth. You can skip /s if you want to iterate through the content of that folder not their sub folder
Implementing search in iteration
To iterate through a particular named files and folders you can search for the name and iterate using for loop
for /F "delims=" %%a in ('dir "file or folder name" /b /s') do echo %%a
To iterate through a particular named folders/directories and not files, then use /AD in the same command
for /F "delims=" %%a in ('dir "folder name" /b /AD /s') do echo %%a
A: There is a subtle difference between running FOR from the command line and from a batch file. In a batch file, you need to put two % characters in front of each variable reference.
From a command line:
FOR %i IN (*) DO ECHO %i
From a batch file:
FOR %%i IN (*) DO ECHO %%i
A: This lists all the files (and only the files) in the current directory and its subdirectories recursively:
for /r %i in (*) do echo %i
Also if you run that command in a batch file you need to double the % signs.
for /r %%i in (*) do echo %%i
(thanks @agnul)
A: for %1 in (*.*) do echo %1
Try "HELP FOR" in cmd for a full guide
This is the guide for XP commands. http://www.ss64.com/nt/
A: This for-loop will list all files in a directory.
pushd somedir
for /f "delims=" %%f in ('dir /b /a-d-h-s') do echo %%f
popd
"delims=" is useful to show long filenames with spaces in it....
'/b" show only names, not size dates etc..
Some things to know about dir's /a argument.
*Any use of "/a" would list everything, including hidden and system attributes.
*"/ad" would only show subdirectories, including hidden and system ones.
*"/a-d" argument eliminates content with 'D'irectory attribute.
*"/a-d-h-s" will show everything, but entries with 'D'irectory, 'H'idden 'S'ystem attribute.
If you use this on the commandline, remove a "%".
Hope this helps.
A: The following code creates a file Named "AllFilesInCurrentDirectorylist.txt" in the current Directory, which contains the list of all files (Only Files) in the current Directory. Check it out
dir /b /a-d > AllFilesInCurrentDirectorylist.txt
A: It could also use the forfiles command:
forfiles /s
and also check if it is a directory
forfiles /p c:\ /s /m *.* /c "cmd /c if @isdir==true echo @file is a directory"
A: Iterate through...
*...files in current dir: for %f in (.\*) do @echo %f
*...subdirs in current dir: for /D %s in (.\*) do @echo %s
*...files in current and all subdirs: for /R %f in (.\*) do @echo %f
*...subdirs in current and all subdirs: for /R /D %s in (.\*) do @echo %s
Unfortunately I did not find any way to iterate over files and subdirs at the same time.
Just use cygwin with its bash for much more functionality.
Apart from this: Did you notice, that the buildin help of MS Windows is a great resource for descriptions of cmd's command line syntax?
Also have a look here: http://technet.microsoft.com/en-us/library/bb490890.aspx
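As a possible workaround for the limitation mentioned above, here is a sketch that walks files and subdirs in one pass from a batch file (the trailing-backslash test on if exist distinguishes directories):
for /f "delims=" %%i in ('dir /b') do (
    if exist "%%i\" (echo directory: %%i) else (echo file: %%i)
)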
A: I would use vbscript (Windows Scripting Host), because in batch I'm sure you cannot tell that a name is a file or a directory.
In vbs, it can be something like this:
Dim fileSystemObject
Set fileSystemObject = CreateObject("Scripting.FileSystemObject")
Dim mainFolder
Set mainFolder = fileSystemObject.GetFolder(myFolder)
Dim files
Set files = mainFolder.Files
For Each file in files
...
Next
Dim subFolders
Set subFolders = mainFolder.SubFolders
For Each folder in subFolders
...
Next
Check FileSystemObject on MSDN.
A: Try this to test if a file is a directory:
FOR /F "delims=" %I IN ('DIR /B /AD "filename" 2^>^&1 ^>NUL') DO IF "%I" == "File Not Found" ECHO Not a directory
This will only tell you whether a file is NOT a directory, which will also be true if the file doesn't exist, so be sure to check for that first if you need to. The carets (^) are used to escape the redirect symbols, and the file listing output is redirected to NUL to prevent it from being displayed, while the DIR listing's error output is redirected to standard output so you can test against DIR's message "File Not Found".
A: I use the xcopy command with the /L option to get the file names. So if you want to get either a directory or all the files in the subdirectory you could do something like this:
for /f "delims=" %%a IN ('xcopy "D:\*.pdf" c:\ /l') do echo %%a
I just use c:\ as the destination because it always exists on Windows systems, and since the /l switch only lists (nothing is actually copied) it does not matter. If you want the subdirectories too, just add the /s option at the end. You can also use the other switches of xcopy if you need them for other reasons.
A: try this:
::Example directory
set SetupDir=C:\Users
::Loop in the folder with "/r" to search recursive folders, %%f being a loop variable
for /r "%SetupDir%" %%f in (*.msi *.exe) do set /a counter+=1
echo there are %counter% files in your folder
It counts the .msi and .exe files in your directory (and its subdirectories), so it also distinguishes folders from files such as executables.
Just add an extension (.pptx, .docx, ...) if you need to filter other files in the loop.
A: %1 refers to the first argument passed in and can't be used in an iterator.
Try this:
@echo off
for %%i in (*.*) do echo %%i
A: To iterate over each file a for loop will work:
for %%f in (directory\path\*) do ( something_here )
In my case I also wanted the file content, name, etc.
This led to a few issues, and I thought my use case might help. Here is a loop that reads info from each '.txt' file in a directory and allows you to do something with it (setx, for instance).
@ECHO OFF
setlocal enabledelayedexpansion
for %%f in (directory\path\*.txt) do (
set /p val=<%%f
echo "fullname: %%f"
echo "name: %%~nf"
echo "contents: !val!"
)
*Limitation: set /p val=<%%f will only get the first line of the file.
A: I had trouble getting jop's answer to work with an absolute path until I found this reference: https://ss64.com/nt/for_r.html
The following example loops through all files in a directory given by the absolute path.
For /R C:\absolute\path\ %%G IN (*.*) do (
Echo %%G
)
A: In my case I had to delete all the files and folders underneath a temp folder, so this is how I ended up doing it. I had to run two loops, one for files and one for folders. If files or folders have spaces in their names, then you have to use quotes (" ").
cd %USERPROFILE%\AppData\Local\Temp\
rem files only
for /r %%a in (*) do (
echo deleting file "%%a" ...
if exist "%%a" del /s /q "%%a"
)
rem folders only
for /D %%a in (*) do (
echo deleting folder "%%a" ...
if exist "%%a" rmdir /s /q "%%a"
)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "435"
}
|
Q: Sorting nvarchar column as integer I have mixed data in an nvarchar column (words and numbers).
What is the fastest way to sort the data in this column in numeric order?
Result example:
*
*1
*2
*3
*...
*10
*11
*...
*aaaa
*aaab
*b
*ba
*ba
*...
A: If you left pad your numbers with 0's and sort on that, you will get your desired results. You'll need to make sure that the number of 0's you pad with matches the size of the varchar column.
Take a look at this example...
Declare @Temp Table(Data VarChar(20))
Insert Into @Temp Values('1')
Insert Into @Temp Values('2')
Insert Into @Temp Values('3')
Insert Into @Temp Values('10')
Insert Into @Temp Values('11')
Insert Into @Temp Values('aaaa')
Insert Into @Temp Values('aaab')
Insert Into @Temp Values('b')
Insert Into @Temp Values('ba')
Insert Into @Temp Values('ba')
Select * From @Temp
Order By Case When IsNumeric(Data) = 1
Then Right('0000000000000000000' + Data, 20)
Else Data End
Also note that when using a CASE statement it is important that each branch of the CASE returns the same data type, or else you will get incorrect results or an error.
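For illustration, here is a sketch of that pitfall, reusing the @Temp table from above. Because int has higher type precedence than varchar, the whole CASE expression is typed as int, and the non-numeric rows then fail to convert:
Select * From @Temp
Order By Case When IsNumeric(Data) = 1
              Then Cast(Data As Int)
              Else Data End -- raises a conversion error on 'aaaa'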
A: Use this:
ORDER BY
CASE WHEN ISNUMERIC(column) = 1 THEN 0 ELSE 1 END,
CASE WHEN ISNUMERIC(column) = 1 THEN CAST(column AS INT) ELSE 0 END,
column
This works as expected.
Note: You say fastest way. This sql was fast for me to produce, but the execution plan shows a table-scan, followed by a scalar computation. This could possibly produce a temporary result containing all the values of that column with some extra temporary columns for the ISNUMERIC results. It might not be fast to execute.
A: --check for existence
if exists (select * from dbo.sysobjects where [id] = object_id(N'dbo.t') AND objectproperty(id, N'IsUserTable') = 1)
drop table dbo.t
go
--create example table
create table dbo.t (c varchar(10) not null)
set nocount on
--populate example table
insert into dbo.t (c) values ('1')
insert into dbo.t (c) values ('2')
insert into dbo.t (c) values ('3 ')
insert into dbo.t (c) values ('10 ')
insert into dbo.t (c) values ('11')
insert into dbo.t (c) values ('aaaa')
insert into dbo.t (c) values ('aaab')
insert into dbo.t (c) values ('b')
insert into dbo.t (c) values ('ba')
insert into dbo.t (c) values ('ba')
--return the data
select c from dbo.t
order by case when isnumeric(c) = 1 then 0 else 1 end,
case when isnumeric(c) = 1 then cast(c as int) else 0 end,
c
A: You can either treat the data as alphanumeric or numeric, not both at the same time. I don't think what you're trying to do is possible; the data model isn't set up appropriately.
A: Cast it.
SELECT * FROM foo ORDER BY CAST(somecolumn AS int);
Been a while since I've touched SQL Server, so my syntax might be entirely incorrect though :)
A:
I don't think what you're trying to do
is possible
This example works fine
SELECT * FROM TableName
ORDER BY CASE WHEN 1 = IsNumeric(ColumnName) THEN Cast(ColumnName AS INT) END
Result is:
*
*a
*b
*c
*...
*1
*2
*3
But I need numbers first.
A: This should work:
select * from Table order by ascii(Column)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Pure Python XSLT library Is there an XSLT library that is pure Python?
Installing libxml2+libxslt or any similar C libraries is a problem on some of the platforms I need to support.
I really only need basic XSLT support, and speed is not a major issue.
A: I don't think you can do it in cpython: there are no pure python XSLT implementations.
But you can trivially do it in Jython, using the built-in XSLT APIs of the JVM. I wrote a blog post for the specific case of doing it on Google AppEngine, but the code given should work under Jython in any circumstances.
Transforming with XSLT on Google AppEngine and jython
http://jython.xhaus.com/transforming-with-xslt-on-google-appengine-and-jython/
HTH,
Alan.
A: Unfortunately there are no pure-python XSLT processors at the moment. If you need something that is more platform independent, you may want to use a Java-based XSLT processor like Saxon. 4Suite is working on a pure-python XPath parser, but it doesn't look like a pure XSLT processor will be out for some time. Perhaps it would be best to use some of Python's functional capabilities to try and approximate the existing stylesheet or look into the feasibility of using Java instead.
A: Have you looked at 4suite?
A: If you only need basic support, and your XML isn't too crazy, consider removing the XSLT element from the equation and just using a DOM/SAX parser.
Here's some info from the PythonInfo Wiki:
[DOM] sucks up an entire XML file, holds it in memory, and lets you work with it. Sax, on the other hand, emits events as it goes step by step through the file.
What do you think?
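For completeness, a minimal SAX sketch using Python's standard library (the file name and handler behaviour are just illustrative):
import xml.sax

class EchoHandler(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        # called for every opening tag as the parser streams through the file
        print("start: " + name)

    def characters(self, content):
        pass  # text content arrives here, possibly in chunks

xml.sax.parse("input.xml", EchoHandler())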
A: There's also http://lxml.de/
"lxml is the most feature-rich and easy-to-use library for processing XML and HTML in the Python language."
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Easiest and quickest way of Web enabling old VBA apps Given a small Excel VBA app (single form, small amount of records saved in a single worksheet) that runs quite happily locally on the workstation, what would be the easiest and quickest way of providing the same app on the web? Re-writing the app is an option but I thought it was worth checking if there was a quicker solution out there.
A: If your business logic is fairly well-separated from the GUI code you could try wrapping it up as an OLE Automation object (called a COM object in VB6 I think) which you then use in an ASP-powered web application. The ASP part will be written in VBScript and use the COM object to do the calculations.
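A minimal sketch of what the ASP side could look like, assuming the VBA logic has been compiled into a COM DLL exposing a hypothetical ProgID "AcmeCalc.Engine" with a hypothetical Calculate method:
<%
' Classic ASP (VBScript): delegate the work to the COM object
Dim engine, result
Set engine = Server.CreateObject("AcmeCalc.Engine") ' hypothetical ProgID
result = engine.Calculate(Request.Form("amount"))   ' hypothetical method
Response.Write "Result: " & result
Set engine = Nothing
%>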
The problem here is that the usual VBA development style mingles GUI code and business logic together in the same subroutines and as a result teasing them apart and replacing the GUI with an ASP page will be more trouble than rewriting from scratch.
There is another way to write a web application using VB6 called Webclasses, but I do not recommend it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: MVC, where do the classes go? My understanding of the MVC is as follows (in case it's horribly wrong, I am after all new to it)
*
*Models are the things that interface with the database
*Views are the design/layout of the page
*Controllers are where everything starts and are essentially the page logic
I'm using CodeIgniter but I would hazard a guess it's not just limited to that or possibly even just to PHP frameworks.
Where do I put global classes?
I may have a model for Products, and I then run a query that collects 20 products from the database. Do I now make 20 models, or should I have a separate class for it? If the latter, where do I put this class (other controllers will need to use it too)?
A: Model is the wrong word to use when discussing what to do with products: each product is a value object (VO) (or data transfer object/DTO, whatever fits in your mouth better). Value objects generally have the same fields that a table contains. In your case ProductVO should have the fields that are in the Products table.
Model is a Data Access Object (DAO) that has methods like
findByPk --> returns a single value object
findAll --> returns a collection of value objects (0-n)
etc.
In your case you would have a ProductDAO that has something like the above methods. This ProductDAO would then return ProductVO's and collections of them.
Data Access Objects can also return Business Objects (BO) which may contain multiple VO's and additional methods that are business case specific.
Addendum:
In your controller you call a ProductDAO to find the products you want.
The returned ProductVO(s) are then passed to the view (as request attributes in Java). The view then loops through/displays the data from the productVO's.
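In PHP terms, a rough structural sketch of that split (names and the query mechanism are illustrative; in CodeIgniter the DAO role would be played by your model class):
<?php
class ProductVO {
    public $id;
    public $name;
    public $price;
}

class ProductDAO {
    private $db; // whatever database handle your framework provides

    public function __construct($db) { $this->db = $db; }

    public function findByPk($id) {
        // run SELECT ... WHERE id = ?, map the row to a ProductVO, return it (or null)
    }

    public function findAll() {
        // run SELECT *, map each row to a ProductVO, return the array
    }
}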
A: Model is the part of your application where business logic happens. Model represents real-life relations and dependencies between objects, like: Employee reports to a Manager, Manager supervises many Employees, Manager can assign a Task to an Employee, Task sends out a notification when overdue. Model CAN and most often DOES interface with a database, but this is not a requirement.
View is basically everything that can be displayed or helps in displaying. View contains templates and template objects, handles template composition and nesting, wraps output with headers and footers, and produces output in one of the well-known formats (X/HTML, but also XML, RSS/Atom, CSV).
Controller is a translation layer that translates user actions to model operations. In other words, it tells the model what to do and returns a response. Controller methods should be as small as possible, and all business processing should be done in the Model, while view logic processing should take place in the View.
Now, back to your question. It really depends if you need separate class for each product. In most cases, one class will suffice and 20 instances of it should be created. As products represent business logic it should belong to Model part of your application.
A: In CakePHP there are 3 more "parts":
*
*Behaviors
*Components
*Helpers
Logic that is used by many models should be made into a behavior. I do not know if CodeIgniter has this concept or not, but if it doesn't, I would try to implement it as such. You can read about behaviors here.
(Components help controllers share logic, and helpers help views in the same way).
A: The simplest way is to:
*
*Have a model class per database table. In this case it would be an object that held all the Product details.
*Put these classes into a package/namespace, e.g., com.company.model (Java / C#)
*Put the DAO classes into a package like com.company.model.dao
*Your view will consume data from the session/request/controller. In this case I would have a List<Product>.
*Oh, you're using PHP. Dunno how that changes things, but I imagine it has a Collections framework like any modern language.
A: @Alexander mentions CakePHP's Behaviors, Components and Helpers. These are excellent for abstracting out common functionality. I find the Behaviors particularly useful, as of course the bulk of the business logic is carried in the models. I am currently working on a project where we have behaviors like:
*
*Lockable
*Publishable
*Tagable
*Rateable
*Commentable
etc.
For code that transcends even the MVC framework, i.e. code libraries that you use for various things that are not tied to the particular framework you are using (in our case things like video encoding classes, etc.), CakePHP has the vendors folder.
Anything that effectively has nothing to do with CakePHP goes in there.
I suspect CodeIgniter doesn't have quite as flexible a structure; it's smaller and lighter than CakePHP, but a quick look at the CakePHP Manual to see how Behaviors, Components, Helpers, and the Vendors folder work may be helpful.
It should be an easy matter to just include some common helper classes from your models to keep things nice and DRY.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: What are Java command line options to set to allow JVM to be remotely debugged? I know there's some JAVA_OPTS to set to remotely debug a Java program.
What are they and what do they mean ?
A: java
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8001,suspend=y -jar target/cxf-boot-simple-0.0.1-SNAPSHOT.jar
address specifies the port on which the JVM will listen for a debugger
Maven
Debug Spring Boot app with Maven:
mvn spring-boot:run -Drun.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8001"
A: Before Java 5.0, use -Xdebug and -Xrunjdwp arguments. These options will still work in later versions, but it will run in interpreted mode instead of JIT, which will be slower.
From Java 5.0, it is better to use the -agentlib:jdwp single option:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1044
Options on -Xrunjdwp or agentlib:jdwp arguments are :
*
*transport=dt_socket : means the way used to connect to JVM (socket is a good choice, it can be used to debug a distant computer)
*address=8000 : TCP/IP port exposed, to connect from the debugger,
*suspend=y : if 'y', tell the JVM to wait until debugger is attached to begin execution, otherwise (if 'n'), starts execution right away.
A: Command Line
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT_NUMBER
Gradle
gradle bootRun --debug-jvm
Maven
mvn spring-boot:run -Drun.jvmArguments="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=PORT_NUMBER"
A: I have this article bookmarked on setting this up for Java 5 and below.
Basically run it with:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044
For Java 5 and above, run it with:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1044
If you want Java to wait for you to connect before executing the application, replace suspend=n with suspend=y.
A: Here is the easiest solution.
There are a lot of environment-specific configurations needed if you are using Maven. So, if you start your program from Maven, just run the mvnDebug command instead of mvn; it will take care of starting your app with remote debugging configured. Now you can just attach a debugger on port 8000.
It'll take care of all the environment problems for you.
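For example (assuming a project with the Tomcat Maven plugin configured), you would run:
mvnDebug tomcat:run
and then attach your IDE's remote debugger to port 8000.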
A: For java 1.5 or greater:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 <YourAppName>
For java 1.4:
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 <YourAppName>
For java 1.3:
java -Xnoagent -Djava.compiler=NONE -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 <YourAppName>
Here is output from a simple program:
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044 HelloWhirled
Listening for transport dt_socket at address: 1044
Hello whirled
A: Since Java 9.0 JDWP supports only local connections by default.
http://www.oracle.com/technetwork/java/javase/9-notes-3745703.html#JDK-8041435
For remote debugging one should run the program with *: in the address:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000
A: -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=PORT_NUMBER
Here we just use a Socket Attaching Connector, which is enabled by default when the dt_socket transport is configured and the VM is running in the server debugging mode.
For more details you can refer to: https://stackify.com/java-remote-debugging/
A: If you are using Java 9 or higher, to debug remotely (which is also the case when you use Docker locally) you have to provide --debug *:($port), because from Java 9 on, --debug ($port) only allows local debugging.
So, you can provide a command in docker-compose like:
command: -- /opt/jboss/wildfly/bin/standalone.sh --debug *:8787
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "380"
}
|
Q: How can I set the ValidationGroup dynamically I have a ASP.NET 2.0 webpage with 2 UserControls (.ascx). Each UserControl contains a bunch of validators. Placing a ValidationSummary on the page will display all validation errors, of both UserControl's. Placing a ValidationSummary in each UserControl will display all the errors of both controls twice.
What I want is a ValidationSummary for each UserControl, displaying only the errors on that UserControl.
I've tried to solve this by setting the ValidationGroup property of the validators on each user control dynamically. That way each ValidationSummary should display only the errors of its UserControl. I've used this code:
foreach (Control ctrl in this.Controls)
{
if (ctrl is BaseValidator)
{
(ctrl as BaseValidator).ValidationGroup = this.ClientID;
}
}
ValidationSummary1.ValidationGroup = this.ClientID;
This however seems to disable both client-side and server-side validation, because no validation occurs when submitting the form.
Help?
A: The control that is causing your form submission (i.e. a Button control) has to be a part of the same validation group as any ValidationSummary and *Validator controls.
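A sketch of what that looks like in markup (the control names are illustrative):
<asp:TextBox ID="NameBox" runat="server" />
<asp:RequiredFieldValidator ID="NameRequired" runat="server"
    ControlToValidate="NameBox" ErrorMessage="Name is required"
    ValidationGroup="GroupA" />
<asp:ValidationSummary ID="SummaryA" runat="server" ValidationGroup="GroupA" />
<asp:Button ID="SubmitA" runat="server" Text="Submit" ValidationGroup="GroupA" />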
A: If you use ValidationGroups, the validation only occurs if the control causing the postback is assigned to the same ValidationGroup.
If you want to use a single control to post back, you can still do this, but you would need to explicitly call the Page.Validate method:
Page.Validate("MyValidationGroup1");
Page.Validate("MyValidationGroup2");
if(Page.IsValid)
{
//do stuff
}
Suggestion:
Why don't you expose a public property on your user controls called ValidationGroup?
In the setter you could explicitly set the validation group for each validator. You could also use your loop, but it would be more efficient to set each validator explicitly. This might improve the readability of the code using the user controls.
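Something along these lines (a sketch; the validator names are illustrative):
public string ValidationGroup
{
    get { return ValidationSummary1.ValidationGroup; }
    set
    {
        // set each validator explicitly instead of walking the Controls collection
        RequiredFieldValidator1.ValidationGroup = value;
        RangeValidator1.ValidationGroup = value;
        ValidationSummary1.ValidationGroup = value;
    }
}
The hosting page can then give each user control instance a distinct group name, along with a matching submit button.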
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Text diff visualization control for WinForms or WPF In continuation of my previous question, are there any good controls for text diff visualization?
Something like StackOverflow's revision diff viewer but for WinForms or WPF
Requirements:
*
*free, preferably open-source
*based on WPF or WinForms
No apps please, only components.
I'm not interested in OSS diff tools
A: You might also want to take a look at MeneesDiffUtils. It has a bunch of diff-related utils, including a visualization control. Full source code is provided, and it is under a license called CharityWare.
A: I recommend DiffPlex. It is netstandard1.0 and very lightweight.
You can easily embed it in your WPF application using the RichTextBox like this: https://github.com/halllo/WpfDiff
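A minimal sketch of driving DiffPlex itself (based on the API as I remember it; the WPF rendering is what the linked WpfDiff sample adds on top):
using DiffPlex;
using DiffPlex.DiffBuilder;
using DiffPlex.DiffBuilder.Model;

var builder = new InlineDiffBuilder(new Differ());
DiffPaneModel diff = builder.BuildDiffModel(oldText, newText);
foreach (var line in diff.Lines)
{
    // line.Type is Inserted, Deleted or Unchanged; map each to a brush when rendering
    Console.WriteLine(line.Type + ": " + line.Text);
}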
A: I never heard about a specific .NET component for diff visualization (it's kind of a niche), but perhaps you could rely on an advanced editor to build your own without too much trouble.
Syncfusion proposes a complete component suite dedicated to text/code editing, with a lot of built-in features that you might find useful:
*
*Line and selection background
*Text highlighting (colored underline, waveline, borders, strike, etc.)
*Selection margin
*Text selection
*Syntax highlighting
Important note: it is not free.
A: There is (as part of GitSharp) an open source diff engine in c# with a very easy to use API and (as part of GitSharp.Demo) a WPF diff viewer. The code should not be too difficult to extract from the project.
Find more information here: http://www.eqqon.com/index.php/GitSharp#GitSharp.Demo
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Is it feasible to compile Python to machine code? How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup).
A: As @Greg Hewgill says, there are good reasons why this is not always possible. However, certain kinds of code (like very algorithmic code) can be turned into "real" machine code.
There are several options:
*
*Use Psyco, which emits machine code dynamically. You should choose carefully which methods/functions to convert, though.
*Use Cython, which is a Python-like language that is compiled into a Python C extension
*Use PyPy, which has a translator from RPython (a restricted subset of Python that does not support some of the most "dynamic" features of Python) to C or LLVM.
*
*PyPy is still highly experimental
*not all extensions will be present
After that, you can use one of the existing packages (freeze, Py2exe, PyInstaller) to put everything into one binary.
All in all: there is no general answer for your question. If you have Python code that is performance-critical, try to use as much builtin functionality as possible (or ask a "How do I make my Python code faster" question). If that doesn't help, try to identify the code and port it to C (or Cython) and use the extension.
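As a taste of the Cython route, a small sketch (assuming Cython is installed; the .pyx file is compiled into a C extension, e.g. with cythonize -i fib.pyx):
# fib.pyx
def fib(int n):
    # typed locals let Cython generate a plain C loop
    cdef int i, a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a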
A: Some extra references:
*
*https://github.com/dropbox/pyston is a JIT compiler for Python developped by Dropbox
*http://pythran.readthedocs.io/ is a compile-time python to C++ translator for scientific computing
*https://github.com/cosmo-ethz/hope is a JIT python to C++ translator for scientific computing
A: Try the ShedSkin Python-to-C++ compiler, but it is far from perfect. There is also Psyco, a Python JIT, if only a speedup is needed. But IMHO this is not worth the effort. For speed-critical parts of the code, the best solution would be to write them as C/C++ extensions.
A: Jython has a compiler targeting JVM bytecode. The bytecode is fully dynamic, just like the Python language itself! Very cool. (Yes, as Greg Hewgill's answer alludes, the bytecode does use the Jython runtime, and so the Jython jar file must be distributed with your app.)
A: Psyco is a kind of just-in-time (JIT) compiler: a dynamic compiler for Python that runs code 2-100 times faster, but needs much memory.
In short: it runs your existing Python software much faster, with no change to your source, but it doesn't compile to object code the same way a C compiler would.
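Typical usage, as documented by the Psyco project, is just a couple of lines:
import psyco
psyco.full()            # JIT-compile all functions from here on
# or, more selectively:
# psyco.bind(my_hot_function)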
A: The answer is "Yes, it is possible". You could take Python code and attempt to compile it into the equivalent C code using the CPython API. In fact, there used to be a Python2C project that did just that, but I haven't heard about it in many years (back in the Python 1.5 days is when I last saw it.)
You could attempt to translate the Python code into native C as much as possible, and fall back to the CPython API when you need actual Python features. I've been toying with that idea myself the last month or two. It is, however, an awful lot of work, and an enormous amount of Python features are very hard to translate into C: nested functions, generators, anything but simple classes with simple methods, anything involving modifying module globals from outside the module, etc, etc.
A: This doesn't compile Python to machine code, but it allows you to create a shared library to call Python code.
If what you are looking for is an easy way to run Python code from C without relying on exec stuff, you could generate a shared library from Python code wrapped with a few calls to the Python embedding API. The application is a shared library, an .so, that you can use in many other libraries/applications.
Here is a simple example which create a shared library, that you can link with a C program. The shared library executes Python code.
The python file that will be executed is pythoncalledfromc.py:
# -*- encoding:utf-8 -*-
# this file must be named "pythoncalledfromc.py"
def main(string): # arg must be a string
print "python is called from c"
print "string sent by «c» code is:"
print string
print "end of «c» code input"
return 0xc0c4 # return something
You can try it with python2 -c "import pythoncalledfromc; pythoncalledfromc.main('HELLO')". It will output:
python is called from c
string sent by «c» code is:
HELLO
end of «c» code input
The shared library interface is defined by callpython.h:
#ifndef CALL_PYTHON
#define CALL_PYTHON
void callpython_init(void);
int callpython(char ** arguments);
void callpython_finalize(void);
#endif
The associated callpython.c is:
// gcc `python2.7-config --ldflags` `python2.7-config --cflags` callpython.c -lpython2.7 -shared -fPIC -o callpython.so
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <python2.7/Python.h>
#include "callpython.h"
#define PYTHON_EXEC_STRING_LENGTH 52
#define PYTHON_EXEC_STRING "import pythoncalledfromc; pythoncalledfromc.main(\"%s\")"
void callpython_init(void) {
Py_Initialize();
}
int callpython(char ** arguments) {
int arguments_string_size = (int) strlen(*arguments);
char * python_script_to_execute = malloc(arguments_string_size + PYTHON_EXEC_STRING_LENGTH);
PyObject *__main__, *locals;
PyObject * result = NULL;
if (python_script_to_execute == NULL)
return -1;
__main__ = PyImport_AddModule("__main__");
if (__main__ == NULL)
return -1;
locals = PyModule_GetDict(__main__);
sprintf(python_script_to_execute, PYTHON_EXEC_STRING, *arguments);
result = PyRun_String(python_script_to_execute, Py_file_input, locals, locals);
if(result == NULL)
return -1;
return 0;
}
void callpython_finalize(void) {
Py_Finalize();
}
You can compile it with the following command:
gcc `python2.7-config --ldflags` `python2.7-config --cflags` callpython.c -lpython2.7 -shared -fPIC -o callpython.so
Create a file named callpythonfromc.c that contains the following:
#include "callpython.h"
int main(void) {
char * example = "HELLO";
callpython_init();
callpython(&example);
callpython_finalize();
return 0;
}
Compile it and run:
gcc callpythonfromc.c callpython.so -o callpythonfromc
PYTHONPATH=`pwd` LD_LIBRARY_PATH=`pwd` ./callpythonfromc
This is a very basic example. It can work, but depending on the library it might still be difficult to serialize C data structures to Python and from Python to C. Things can be automated somewhat...
Nuitka might be helpful.
There is also Numba, but neither aims to do exactly what you want. Generating a C header from Python code is possible, but only if you specify how to convert the Python types to C types, or that information can be inferred. See Python astroid for a Python AST analyzer.
A: Nuitka is a Python to C++ compiler that links against libpython. It appears to be a relatively new project. The author claims a speed improvement over CPython on the pystone benchmark.
A: PyPy is a project to reimplement Python in Python, using compilation to native code as one of the implementation strategies (others being a VM with JIT, using JVM, etc.). Their compiled C versions run slower than CPython on average but much faster for some programs.
Shedskin is an experimental Python-to-C++ compiler.
Pyrex is a language specially designed for writing Python extension modules. It's designed to bridge the gap between the nice, high-level, easy-to-use world of Python and the messy, low-level world of C.
A: Pyrex is a subset of the Python language that compiles to C, done by the guy that first built list comprehensions for Python. It was mainly developed for building wrappers but can be used in a more general context. Cython is a more actively maintained fork of pyrex.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "151"
}
|
Q: Java Swing: Ctrl+F1 does not work globally, but each other key combination I have a swing gui with a tabbed pane in the north. Several key events are added to its input map:
InputMap paneInputMap = pane.getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT);
paneInputMap.put( KeyStroke.getKeyStroke( KeyEvent.VK_E, KeyEvent.CTRL_MASK ), "finish");
paneInputMap.put( KeyStroke.getKeyStroke( KeyEvent.VK_F1, KeyEvent.CTRL_MASK ), "toggletoolbar");
If the tabbed pane or another button in a toolbar has the focus, Ctrl+F1 has no function. If another component is focused (e.g. JTree), Ctrl+F1 executes the action.
The problem is that it works everywhere if I change the key code to e.g. VK_F2.
The key F1 isn't used anywhere else in the program.
Any idea?
Thanks,
André
Edit: A full-text search in the Java source code gave the answer: the ToolTipManager registers the key Ctrl+F1 to display the tooltip text when the key combination is pressed. So if a button with a tooltip is focused, Ctrl+F1 is handled by the ToolTipManager. Otherwise my action is called.
A: So that this gets an answer, here's the solution copied from your edit in the question. ;-)
The ToolTipManager registers the key
Ctrl+F1 to display the tooltip text if
the key combination is pressed. So if
a button with a tooltip is focused,
Ctrl+F1 is handled by the
ToolTipManager. Otherwise my action is
called.
A: Maybe the OS retargets the F1 key? Install a key listener and see what events are handled.
BTW: It would help if you could edit your question and insert some testable code.
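For the key-listener suggestion, a throwaway sketch (component stands for whichever component you are probing):
component.addKeyListener(new java.awt.event.KeyAdapter() {
    @Override
    public void keyPressed(java.awt.event.KeyEvent e) {
        // dump every key event so you can see who consumes Ctrl+F1
        System.out.println(e);
    }
});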
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Can't view full-text catalogs in SQL Server 2008 with Advanced Services I'm using SQL Server 2008 with Advanced Services on my Vista Home Premium. I installed full-text searching during installation, and the SQL Full-text Filter Daemon Launcher is running through an admin user account. When I go to a database through SQL Server Management Studio, I don't see the "Storage" option under the database, so I can't create/edit my full-text catalogs.
I was able to create a Full-text catalog through T-SQL, and can run Full-text searches on the columns I've selected in the database. I'm just not able to see the "Storage" option.
Any idea what's missing?
A: Ok, the answer is:
SQL Management Studio Basic doesn't support Full-text catalogs. Catalogs can only be created via T-SQL. This is a feature that might be added in the future.
A: First,
EXEC sp_fulltext_database 'enable';
GO
Then,
CREATE FULLTEXT CATALOG ftcatalog;
GO
Then,
CREATE FULLTEXT INDEX ON dbo.Tablename ( Column )
KEY INDEX PK_PriKeyIndex
ON ftcatalog
WITH CHANGE_TRACKING AUTO
GO
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to register file types/extensions with a WiX installer? I didn't find an explicit answer to this question in the WiX Documentation (or Google, for that matter). Of course I could just write the appropriate registry keys in HKCR, but it makes me feel dirty and I'd expect this to be a standard task which should have a nice default solution.
For bonus points, I'd like to know how to make it "safe", i.e. don't overwrite existing registrations for the file type and remove the registration on uninstall only if it has been registered during installation and is unchanged.
A: "If your application handles its own file data type, you will need to register a file association for it. Put a ProgId inside your component. FileId should refer to the Id attribute of the File element describing the file meant to handle the files of this extension. Note the exclamation mark: it will return the short path of the file instead of the long one:"
<ProgId Id='AcmeFoobar.xyzfile' Description='Acme Foobar data file'>
<Extension Id='xyz' ContentType='application/xyz'>
<Verb Id='open' Sequence='10' Command='Open' Target='[!FileId]' Argument='"%1"' />
</Extension>
</ProgId>
Reference: https://www.firegiant.com/wix/tutorial/getting-started/beyond-files/
A: Unfortunately there's no way to do a "safe" association with Windows Installer.
We just write everything out to the registry and then have a separate component that takes over the system-wide default and is only installed if no other application has already registered itself as the default.
With Vista there's the new "default programs" interface, again you write everything out to the registry. Here's a complete example that we're using in our installer. (WiX 3.0)
Update: 12 months have passed since my original answer and I have a better understanding of file associations. Rather than writing everything manually I'm now using proper ProgId definitions which improves handling for advertised packages. See the updated code posted in response to this question.
<Component ....>
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities" Name="ApplicationDescription" Value="ACME Foobar XYZ Editor" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities" Name="ApplicationIcon" Value="[APPLICATIONFOLDER]AcmeFoobar.exe,0" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities" Name="ApplicationName" Value="ACME Foobar" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities\DefaultIcon" Value="[APPLICATIONFOLDER]AcmeFoobar.exe,1" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities\FileAssociations" Name=".xyz" Value="AcmeFoobar.Document" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities\MIMEAssociations" Name="application/xyz" Value="AcmeFoobar.Document" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\AcmeFoobar\Capabilities\shell\Open\command" Value=""[APPLICATIONFOLDER]AcmeFoobar.exe" "%1"" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\RegisteredApplications" Name="Acme Foobar" Value="SOFTWARE\AcmeFoobar\Capabilities" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\.xyz" Name="Content Type" Value="application/xyz" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\.xyz\AcmeFoobar.Document\ShellNew" Value="" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\.xyz\OpenWithList\AcmeFoobar.exe" Value="" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\.xyz\OpenWithProgids" Name="AcmeFoobar.Document" Value="" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\Applications\AcmeFoobar.exe\SupportedTypes" Name=".xyz" Value="" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\Applications\AcmeFoobar.exe\shell\open" Name="FriendlyAppName" Value="ACME Foobar" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\AcmeFoobar.exe" Value="[!AcmeFoobar.exe]" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\AcmeFoobar.exe" Name="Path" Value="[APPLICATIONFOLDER]" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\SystemFileAssociations\.xyz\shell\edit.AcmeFoobar.exe" Value="Edit with ACME Foobar" Type="string" />
<RegistryValue Root="HKLM" Key="SOFTWARE\Classes\SystemFileAssociations\.xyz\shell\edit.AcmeFoobar.exe\command" Value=""[APPLICATIONFOLDER]AcmeFoobar.exe" "%1"" Type="string" />
</Component>
<Component ....>
<ProgId Id="AcmeFoobar.Document" Description="ACME XYZ Document">
<Extension Id="pdf" ContentType="application/xyz">
<Verb Id="open" Command="Open" TargetFile="[APPLICATIONFOLDER]AcmeFoobar.exe" Argument="%1" />
</Extension>
</ProgId>
<Condition><![CDATA[DEFAULTVIEWER=1]]></Condition>
</Component>
A: After some additional research, I found a partial answer to this question in the WiX Tutorial. It shows an advertised solution and does not work with WiX 3.0, but given that information, I figured it out. Add a ProgId element to the component containing your executable, like the following:
<ProgId Id="MyApplication.MyFile" Description="My file type">
<Extension Id="myext" ContentType="application/whatever">
<Verb Id="open" Command="open" TargetFile="MyApplication.exe" Argument=""%1""/>
</Extension>
</ProgId>
myext is the file extension without the dot, and MyApplication.exe is the file id (not name) of the executable file (i.e. the Id attribute of the File element).
This will register the file type with your executable and will supply a default icon (a white page with the application icon on it), which is sufficient for my needs. If you want to specify a dedicated icon, it seems you still have to do this yourself, like the following (code from the linked tutorial):
<Registry Id='FooIcon1' Root='HKCR' Key='.xyz' Action='write' Type='string' Value='AcmeFoobar.xyzfile' />
<Registry Id='FooIcon2' Root='HKCR' Key='AcmeFoobar.xyzfile' Action='write' Type='string' Value='Acme Foobar data file' />
<Registry Id='FooIcon3' Root='HKCR' Key='AcmeFoobar.xyzfile\DefaultIcon' Action='write' Type='string' Value='[INSTALLDIR]Foobar.exe,1' />
I didn't find a good solution for my bonus question though.
Edit: I started writing this before the previous answer came. However, my solution actually works, in contrast to the previous answer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Can Regex be used for this particular string manipulation? I need to replace character (say) x with character (say) P in a string, but only if it is contained in a quoted substring.
An example makes it clearer:
axbx'cxdxe'fxgh'ixj'k -> axbx'cPdPe'fxgh'iPj'k
Let's assume, for the sake of simplicity, that quotes always come in pairs.
The obvious way is to just process the string one character at a time (a simple state machine approach);
however, I'm wondering if regular expressions can be used to do all the processing in one go.
My target language is C#, but I guess my question pertains to any language having builtin or library support for regular expressions.
A: I converted Greg Hewgill's python code to C# and it worked!
[Test]
public void ReplaceTextInQuotes()
{
Assert.AreEqual("axbx'cPdPe'fxgh'iPj'k",
Regex.Replace("axbx'cxdxe'fxgh'ixj'k",
@"x(?=[^']*'([^']|'[^']*')*$)", "P"));
}
That test passed.
A: I was able to do this with Python:
>>> import re
>>> re.sub(r"x(?=[^']*'([^']|'[^']*')*$)", "P", "axbx'cxdxe'fxgh'ixj'k")
"axbx'cPdPe'fxgh'iPj'k"
What this does is use the non-capturing match (?=...) to check that the character x is within a quoted string. It looks for some nonquote characters up to the next quote, then looks for a sequence of either single characters or quoted groups of characters, until the end of the string.
This relies on your assumption that the quotes are always balanced. This is also not very efficient.
A: A more general (and simpler) solution which allows non-paired quotes.
*
*Find quoted string
*Replace 'x' by 'P' in the string
#!/usr/bin/env python
import re
text = "axbx'cxdxe'fxgh'ixj'k"
s = re.sub("'.*?'", lambda m: re.sub("x", "P", m.group(0)), text)
print s == "axbx'cPdPe'fxgh'iPj'k", s
# -> True axbx'cPdPe'fxgh'iPj'k
A: The trick is to use non-capturing group to match the part of the string following the match (character x) we are searching for.
Trying to match the string up to x will only find either the first or the last occurrence, depending on whether non-greedy quantifiers are used.
Here's Greg's idea transposed to Tcl, with comments.
set strIn {axbx'cxdxe'fxgh'ixj'k}
set regex {(?x) # enable expanded syntax
# - allows comments, ignores whitespace
x # the actual match
(?= # non-matching group
[^']*' # match to end of current quoted substring
##
## assuming quotes are in pairs,
## make sure we actually were
## inside a quoted substring
## by making sure the rest of the string
## is what we expect it to be
##
(
[^']* # match any non-quoted substring
| # ...or...
'[^']*' # any quoted substring, including the quotes
)* # any number of times
$ # until we run out of string :)
) # end of non-matching group
}
#the same regular expression without the comments
set regexCondensed {(?x)x(?=[^']*'([^']|'[^']*')*$)}
set replRegex {P}
set nMatches [regsub -all -- $regex $strIn $replRegex strOut]
puts "$nMatches replacements. "
if {$nMatches > 0} {
puts "Original: |$strIn|"
puts "Result: |$strOut|"
}
exit
This prints:
3 replacements.
Original: |axbx'cxdxe'fxgh'ixj'k|
Result: |axbx'cPdPe'fxgh'iPj'k|
A: #!/usr/bin/perl -w
use strict;
# Break up the string.
# The spliting uses quotes
# as the delimiter.
# Put every broken substring
# into the @fields array.
my @fields;
while (<>) {
@fields = split /'/, $_;
}
# For every substring indexed with an odd
# number (i.e. inside quotes), search for x
# and replace it with P.
my $count;
my $end = $#fields;
for ($count = 0; $count <= $end; $count++) {
    if ($count % 2 == 1) {
        $fields[$count] =~ s/x/P/g;
    }
}
# Reassemble and print the result
print join("'", @fields), "\n";
Wouldn't this chunk do the job?
A: Not with plain regexp. Regular expressions have no "memory", so they cannot distinguish between being "inside" or "outside" quotes.
You need something more powerful; for example, using gema it would be straightforward:
'<repl>'=$0
repl:x=P
A: Similar discussion about balanced text replaces: Can regular expressions be used to match nested patterns?
You can try this in Vim, but it works well only if the string is on one line, and there's only one pair of 's.
:%s:\('[^']*\)x\([^']*'\):\1P\2:gci
If there's one more pair, or even an unbalanced ', then it could fail. That's why I included the c (a.k.a. confirm) flag on the ex command.
The same can be done with sed, without the interaction - or with awk so you can add some interaction.
One possible solution is to break the lines on pairs of 's; then you can use the Vim solution.
A: Pattern: (?s)\G((?:^[^']*'|(?<=.))(?:'[^']*'|[^'x]+)*+)x
Replacement: \1P
*
*\G — Anchor each match at the end of the previous one, or the start of the string.
*(?:^[^']*'|(?<=.)) — If it is at the beginning of the string, match up to the first quote.
*(?:'[^']*'|[^'x]+)*+ — Match any block of unquoted characters, or any (non-quote) characters up to an 'x'.
One sweep through the source string, except for a single-character look-behind.
A: Sorry to break your hopes, but you need a push-down automaton to do that. There is more info here:
Pushdown Automaton
In short, regular expressions, which are finite state machines, can only read and have no memory, while a pushdown automaton has a stack and manipulation capabilities.
Edit: spelling...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to convert HTML to XHTML? I need to convert HTML documents into valid XML, preferably XHTML. What's the best way to do this? Does anybody know a toolkit/library/sample/...whatever that helps me to get that task done?
To be a bit more clear here, my application has to do the conversion automatically at runtime. I don't look for a tool that helps me to move some pages to XHTML manually.
A: You can use the HTML Agility Pack. It's an open-source project from CodePlex.
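A minimal conversion sketch with it (treat the exact option name as an assumption on my part):
HtmlDocument doc = new HtmlDocument();
doc.OptionOutputAsXml = true; // emit well-formed XML when saving
doc.Load("input.html");
doc.Save("output.xhtml");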
A: The Validator.nu HTML Parser comes with an HTML2XML sample program that does the conversion using the HTML5 parsing algorithm and infoset coercion rules.
A: Use Html2Xhtml for .NET 4.0:
In-memory string-to-string conversion:
var xhtml = Html2Xhtml.RunAsFilter(stdin => stdin.Write(html)).ReadToEnd();
In-memory string-to-XDocument conversion:
var xdoc = Html2Xhtml.RunAsFilter(stdin => stdin.Write(html)).ReadToXDocument();
See http://corsis.sourceforge.net/index.php/Html2Xhtml for more information.
A: Convert from HTML to XML with HTML Tidy
Downloadable Binaries
JRoppert, for your need, I guess you might want to look at the sources.
c:\temp>tidy -help
tidy [option...] [file...] [option...] [file...]
Utility to clean up and pretty print HTML/XHTML/XML
see http://tidy.sourceforge.net/
Options for HTML Tidy for Windows released on 14 February 2006:
File manipulation
-----------------
-output <file>, -o write output to the specified <file>
<file>
-config <file> set configuration options from the specified <file>
-file <file>, -f write errors to the specified <file>
<file>
-modify, -m modify the original input files
Processing directives
---------------------
-indent, -i indent element content
-wrap <column>, -w wrap text at the specified <column>. 0 is assumed if
<column> <column> is missing. When this option is omitted, the
default of the configuration option "wrap" applies.
-upper, -u force tags to upper case
-clean, -c replace FONT, NOBR and CENTER tags by CSS
-bare, -b strip out smart quotes and em dashes, etc.
-numeric, -n output numeric rather than named entities
-errors, -e only show errors
-quiet, -q suppress nonessential output
-omit omit optional end tags
-xml specify the input is well formed XML
-asxml, -asxhtml convert HTML to well formed XHTML
-ashtml force XHTML to well formed HTML
-access <level> do additional accessibility checks (<level> = 0, 1, 2, 3).
0 is assumed if <level> is missing.
Character encodings
-------------------
-raw output values above 127 without conversion to entities
-ascii use ISO-8859-1 for input, US-ASCII for output
-latin0 use ISO-8859-15 for input, US-ASCII for output
-latin1 use ISO-8859-1 for both input and output
-iso2022 use ISO-2022 for both input and output
-utf8 use UTF-8 for both input and output
-mac use MacRoman for input, US-ASCII for output
-win1252 use Windows-1252 for input, US-ASCII for output
-ibm858 use IBM-858 (CP850+Euro) for input, US-ASCII for output
-utf16le use UTF-16LE for both input and output
-utf16be use UTF-16BE for both input and output
-utf16 use UTF-16 for both input and output
-big5 use Big5 for both input and output
-shiftjis use Shift_JIS for both input and output
-language <lang> set the two-letter language code <lang> (for future use)
Miscellaneous
-------------
-version, -v show the version of Tidy
-help, -h, -? list the command line options
-xml-help list the command line options in XML format
-help-config list all configuration options
-xml-config list all configuration options in XML format
-show-config list the current configuration settings
Use --blah blarg for any configuration option "blah" with argument "blarg"
Input/Output default to stdin/stdout respectively
Single letter options apart from -f may be combined
as in: tidy -f errs.txt -imu foo.html
For further info on HTML see http://www.w3.org/MarkUp
A: http://corsis.sourceforge.net/index.php/Html2Xhtmlhttp://corsis.sourceforge.net/index.php/Html2Xhtml
Html2Xhtml is a .NET 4.0 library for converting HTML to XHTML licensed under GPLv2 or above.
I tested Html2Xhtml in the local reconstruction of a large online database of the European Union. Tidy/Tidy.NET would not even produce valid output most of the time, and Chilkat's HTML-to-XML was a bit slow and produced strange results (misplaced, missing, unexplainable elements). In an attempt to find a free, fast and reliable conversion tool I created this library. It converts 2 - 4x faster than all other libraries I tested.
Html2Xhtml, combined with the power of LINQ to XML, is an excellent tool for all large-scale data extraction and web crawling scenarios.
A: You can convert HTML to XHTML with the tidy executable:
tidy -asxhtml -numeric < index.html > index.xhtml
You can check the C# implementation here.
A: The easiest way is to set your Visual Studio IDE to identify the changes you need to make.
You can do this in Visual Studio 2008 by going to:
Tools, Options, Text Editor, HTML, Validation and choosing the appropriate target.
Possibly XHTML 1.1 or XHTML 1.0 Transitional.
For some information on the different types, read:
http://msdn.microsoft.com/en-us/library/aa479043.aspx
Then you need to work through the points highlighted on your page.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: VB FFT - stuck understanding relationship of results to frequency Trying to understand an fft (Fast Fourier Transform) routine I'm using (stealing)(recycling)
Input is an array of 512 data points which are a sample waveform.
Test data is generated into this array. The fft transforms this array into the frequency domain.
Trying to understand relationship between freq, period, sample rate and position in fft array. I'll illustrate with examples:
========================================
Sample rate is 1000 samples/s.
Generate a set of samples at 10Hz.
Input array has peak values at arr(28), arr(128), arr(228) ...
period = 100 sample points
peak value in fft array is at index 6 (excluding a huge value at 0)
========================================
Sample rate is 8000 samples/s
Generate set of samples at 440Hz
Input array peak values include arr(7), arr(25), arr(43), arr(61) ...
period = 18 sample points
peak value in fft array is at index 29 (excluding a huge value at 0)
========================================
How do I relate the index of the peak in the fft array to frequency?
A: If you ignore the imaginary part, the frequency distribution is linear across bins:
Frequency@i = (Sampling rate/2)*(i/Nbins).
So for your first example, assumming you had 256 bins, the largest bin corresponds to a frequency of 1000/2 * 6/256 = 11.7 Hz.
Since your input was 10Hz, I'd guess that bin 5 (9.7Hz) also had a big component.
To get better accuracy, you need to take more samples, to get smaller bins.
Your second example gives 8000/2*29/256 = 453Hz. Again, close, but you need more bins.
Your resolution here is only 4000/256 = 15.6Hz.
A: It would be helpful if you were to provide your sample dataset.
My guess would be that you have what are called sampling artifacts. The strong signal at DC ( frequency 0 ) suggests that this is the case.
You should always ensure that the average value in your input data is zero - finding the average and subtracting it from each sample point before invoking the fft is good practice.
Along the same lines, you have to be careful about the sampling window artifact. It is important that the first and last data point are close to zero because otherwise the "step" from outside to inside the sampling window has the effect of injecting a whole lot of energy at different frequencies.
The bottom line is that doing an fft analysis requires more care than simply recycling a fft routine found somewhere.
Here are the first 100 sample points of a 10Hz signal as described in the question, massaged to avoid sampling artifacts
> sinx[1:100]
[1] 0.000000e+00 6.279052e-02 1.253332e-01 1.873813e-01 2.486899e-01 3.090170e-01 3.681246e-01 4.257793e-01 4.817537e-01 5.358268e-01
[11] 5.877853e-01 6.374240e-01 6.845471e-01 7.289686e-01 7.705132e-01 8.090170e-01 8.443279e-01 8.763067e-01 9.048271e-01 9.297765e-01
[21] 9.510565e-01 9.685832e-01 9.822873e-01 9.921147e-01 9.980267e-01 1.000000e+00 9.980267e-01 9.921147e-01 9.822873e-01 9.685832e-01
[31] 9.510565e-01 9.297765e-01 9.048271e-01 8.763067e-01 8.443279e-01 8.090170e-01 7.705132e-01 7.289686e-01 6.845471e-01 6.374240e-01
[41] 5.877853e-01 5.358268e-01 4.817537e-01 4.257793e-01 3.681246e-01 3.090170e-01 2.486899e-01 1.873813e-01 1.253332e-01 6.279052e-02
[51] -2.542075e-15 -6.279052e-02 -1.253332e-01 -1.873813e-01 -2.486899e-01 -3.090170e-01 -3.681246e-01 -4.257793e-01 -4.817537e-01 -5.358268e-01
[61] -5.877853e-01 -6.374240e-01 -6.845471e-01 -7.289686e-01 -7.705132e-01 -8.090170e-01 -8.443279e-01 -8.763067e-01 -9.048271e-01 -9.297765e-01
[71] -9.510565e-01 -9.685832e-01 -9.822873e-01 -9.921147e-01 -9.980267e-01 -1.000000e+00 -9.980267e-01 -9.921147e-01 -9.822873e-01 -9.685832e-01
[81] -9.510565e-01 -9.297765e-01 -9.048271e-01 -8.763067e-01 -8.443279e-01 -8.090170e-01 -7.705132e-01 -7.289686e-01 -6.845471e-01 -6.374240e-01
[91] -5.877853e-01 -5.358268e-01 -4.817537e-01 -4.257793e-01 -3.681246e-01 -3.090170e-01 -2.486899e-01 -1.873813e-01 -1.253332e-01 -6.279052e-02
And here are the resulting absolute values of the fft frequency domain:
[1] 7.160038e-13 1.008741e-01 2.080408e-01 3.291725e-01 4.753899e-01 6.653660e-01 9.352601e-01 1.368212e+00 2.211653e+00 4.691243e+00 5.001674e+02
[12] 5.293086e+00 2.742218e+00 1.891330e+00 1.462830e+00 1.203175e+00 1.028079e+00 9.014559e-01 8.052577e-01 7.294489e-01
A: It's been some time since I've done FFT's but here's what I remember
FFT usually takes complex numbers as input and output. So I'm not really sure how the real and imaginary part of the input and output map to the arrays.
I don't really understand what you're doing. In the first example you say you process sample buffers at 10Hz for a sample rate of 1000 Hz? So you should have 10 buffers per second with 100 samples each. I don't get how your input array can be at least 228 samples long.
Usually the first half of the output buffer holds frequency bins from 0 frequency (= DC offset) to 1/2 the sample rate, and the 2nd half holds negative frequencies. If your input is only real data, with 0 for the imaginary signal, the positive and negative frequencies are the same. The relationship of the real/imaginary signal on the output contains phase information from your input signal.
A: I'm a little rusty too on math and signal processing but with the additional info I can give it a shot.
If you want to know the signal energy per bin you need the magnitude of the complex output, so just looking at the real output is not enough, even when the input is only real numbers. For every bin the magnitude of the output is sqrt(real^2 + imag^2), just like Pythagoras :-)
Bins 0 to 499 are positive frequencies from 0 Hz to 500 Hz, and bins 500 to 999 are negative frequencies, which should be the same as the positive ones for a real signal. If you process one buffer every second, frequencies and array indices line up nicely. So the peak at index 6 corresponds with 6 Hz, which is a bit strange. This might be because you're only looking at the real output data, and the real and imaginary data combine to give an expected peak at index 10. The frequencies should map linearly to the bins.
The peak at 0 indicates a DC offset.
A: The frequency for bin i is i * (samplerate / n), where n is the number of samples in the FFT's input window.
If you're handling audio, since pitch is proportional to log of frequency, the pitch resolution of the bins increases as the frequency does -- it's hard to resolve low frequency signals accurately. To do so you need to use larger FFT windows, which reduces time resolution. There is a tradeoff of frequency against time resolution for a given sample rate.
You mention a bin with a large value at 0 -- this is the bin with frequency 0, i.e. the DC component. If this is large, then presumably your values are generally positive. Bin n/2 (in your case 256) is the Nyquist frequency, half the sample rate, which is the highest frequency that can be resolved in the sampled signal at this rate.
If the signal is real, then bins n/2+1 to n-1 will contain the complex conjugates of bins n/2-1 to 1, respectively. The DC value only appears once.
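To make the two formulas above concrete, here is a small hedged sketch (assuming the FFT output is available as separate real and imaginary arrays, which is one common layout; the names are illustrative, not from the question):
#include <cmath>
#include <cstdio>
// Print the center frequency and magnitude of every bin up to Nyquist.
void printSpectrum(const double* re, const double* im, int n, double sampleRate)
{
    for (int i = 0; i <= n / 2; ++i) {
        double freq = i * (sampleRate / n);                     // bin i -> i * (samplerate / n)
        double mag  = std::sqrt(re[i] * re[i] + im[i] * im[i]); // sqrt(real^2 + imag^2)
        std::printf("%8.2f Hz : %g\n", freq, mag);
    }
}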
A: The samples are, as others have said, equally spaced in the frequency domain (not logarithmic).
For example 1, you should get this:
(image of the expected FFT output for example 1: http://home.comcast.net/~kootsoop/images/SINE1.jpg)
For the other example you should get
(image of the expected FFT output for the other example: http://home.comcast.net/~kootsoop/images/SINE2.jpg)
So your answers both appear to be correct regarding the peak location.
What I'm not getting is the large DC component. Are you sure you are generating a sine wave as the input? Does the input go negative? For a sinewave, the DC should be close to zero provided you get enough cycles.
A: Another avenue is to craft a Goertzel's algorithm detector for each note center frequency you are looking for.
Once you get one implementation of the algorithm working, you can make it take parameters that set its center frequency. With that you could easily run 88 of them, or whatever you need, in a collection and scan for the peak value.
The Goertzel algorithm is basically a single-bin FFT. Using this method you can place your bins logarithmically, as musical notes naturally go.
Some pseudo code from Wikipedia:
s_prev = 0
s_prev2 = 0
coeff = 2*cos(2*PI*normalized_frequency);
for each sample, x[n],
s = x[n] + coeff*s_prev - s_prev2;
s_prev2 = s_prev;
s_prev = s;
end
power = s_prev2*s_prev2 + s_prev*s_prev - coeff*s_prev2*s_prev;
The two variables representing the previous two samples are maintained for the next iteration, so this can be used in a streaming application. I think perhaps the power calculation should be inside the loop as well. (However, it is not depicted as such in the Wiki article.)
In the tone detection case there would be 88 different coefficients and 88 pairs of previous samples, and it would result in 88 power output samples indicating the relative level in each frequency bin.
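For concreteness, here is a minimal runnable C++ sketch of one such single-bin detector (a hedged illustration, not the questioner's code; the normalized frequency is the target frequency divided by the sample rate):
#include <cmath>
#include <vector>
// Goertzel power of `samples` at `targetHz`, given the sample rate in Hz.
double goertzelPower(const std::vector<double>& samples, double targetHz, double sampleRate)
{
    const double pi = 3.14159265358979323846;
    const double coeff = 2.0 * std::cos(2.0 * pi * targetHz / sampleRate);
    double sPrev = 0.0, sPrev2 = 0.0;
    for (double x : samples) {
        const double s = x + coeff * sPrev - sPrev2;
        sPrev2 = sPrev;
        sPrev  = s;
    }
    // Relative signal power in this one frequency bin.
    return sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
}
For 88 piano notes you would construct 88 of these (or precompute 88 coeff values), feed each incoming block to all of them, and scan for the peak power.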
A: WaveyDavey says that he's capturing sound from a mic, thru the audio hardware of his computer, BUT that his results are not zero-centered. This sounds like a problem with the hardware. It SHOULD BE zero-centered.
When the room is quiet, the stream of values coming from the sound API should be very close to 0 amplitude, with slight +- variations for ambient noise. If a vibratory sound is present in the room (e.g. a piano, a flute, a voice) the data stream should show a fundamentally sinusoidal-based wave that goes both positive and negative, and averages near zero. If this is not the case, the system has some funk going on!
-Rick
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: PHP Class Database Connection Scope Issue For a new project that I'm doing in PHP I've created an SQLMethods class to connect to the database and perform queries. Tonight was the first night that I actually got to test it (I wrote it a week or so ago and forgot about it) and an unexpected error occurred: When it was calling my ExecuteQuery() function, it wouldn't use the database I selected in the constructor.
The constructor:
public function SQLMethods() {
$SQLConnection = mysql_connect($SQLDBAddress, $SQLUserName, $SQLPassword);
if (!$SQLConnection) {
die('Could not connect: ' . mysql_error());
}
mysql_select_db($SQLDB, $SQLConnection);
}
The function in question:
public function ExecuteQuery($Query) {
mysql_query($Query, $SQLConnection) or die('Could not perform query: ' . mysql_error());
}
Does anyone see what the issue might be? Does the connection close after the constructor completes?
A: You should declare $SQLConnection in your class, and you should refer to it as
$this->SQLConnection
and not simply $SQLConnection.
A: $SQLConnection doesn't exist within the ExecuteQuery method.
You can either pass it directly as a parameter to ExecuteQuery, or add an sqlConnection class property that is set in the constructor and accessed as $this->sqlConnection inside your class methods.
A: The variable $SQLConnection that ExecuteQuery() is trying to use was created within another scope (the SQLMethods constructor).
The connection closes when the PHP script has done its work or if you close it yourself (if the connection is made within that script)
You should skip the $SQLConnection variable within ExecuteQuery, as stated by the php.net documentation:
If the link identifier is not specified, the last link opened by mysql_connect() is assumed.
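Pulling those answers together, a minimal sketch of the property-based fix (the connection parameters are shown as constructor arguments, which is an assumption, and the constructor is renamed to the PHP 5 __construct style; the legacy mysql_* API is kept to match the question):
class SQLMethods {
    public $SQLConnection;

    public function __construct($SQLDBAddress, $SQLUserName, $SQLPassword, $SQLDB) {
        // Store the link on the object so other methods can reach it.
        $this->SQLConnection = mysql_connect($SQLDBAddress, $SQLUserName, $SQLPassword);
        if (!$this->SQLConnection) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db($SQLDB, $this->SQLConnection);
    }

    public function ExecuteQuery($Query) {
        $result = mysql_query($Query, $this->SQLConnection);
        if (!$result) {
            die('Could not perform query: ' . mysql_error());
        }
        return $result;
    }
}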
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Compress Script Resources of ASP.Net How do you compress the script resources of ASP.Net? I saw a file there that reached 255 KB! I tried finding solutions, but so far everything only talks about compressing dynamic and static files. I checked the compression temp folder of IIS and found no compressed script resource there. That led me to the conclusion that these files are transferred uncompressed, at a high bandwidth cost.
A: If you're running IIS6 the guys at OrcsWeb have a nice wee article -
http://weblogs.asp.net/owscott/archive/2004/01/12/57916.aspx
We have customers running the port80 software because they get more control:
http://www.port80software.com/products/zipenable/
http://www.port80software.com/products/httpzip/
A: I don't know about tying something into ASP.net but there are a number of standalone compressors. Packer is available as a .Net app. JSMin is available in a number of languages but none of them .Net, and there's ShrinkSafe which requires java. It should be pretty simple to tie any of them into your build process.
A: The best approach would be to implement an HttpHandler and register it in the web.config.
see http://blog.madskristensen.dk/post/Optimize-WebResourceaxd-and-ScriptResourceaxd.aspx
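The registration in web.config looks roughly like this (the handler type name here is hypothetical; the linked article supplies a real implementation):
<system.web>
  <httpHandlers>
    <!-- Hypothetical compressing handler for ScriptResource.axd requests. -->
    <add verb="GET" path="ScriptResource.axd"
         type="MyApp.CompressionHandler, MyApp" validate="false" />
  </httpHandlers>
</system.web>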
A: Check out the JSCompress task in the MSBuild Community Tasks (http://msbuildtasks.tigris.org/). It'll strip out the whitespace from a JS file for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Permission in Visual Studio 2008 reports I have successfully created a custom assembly and added it to the report using AddTrustedCodeModuleInCurrentAppDomain. I am executing the report in the current AppDomain.
When I try to access SQL I get reporting services System.Data.SqlClient.SqlClientPermission failed. I have tried adding System.Data to the trusted assemblies as above but it doesn't help.
How do I ensure that this permission is present?
A: From the error it sounds like your login credentials are being lost. What kind of authorization are you using?
If you are using Windows Authorization, you may be losing your "you"ness. Doesn't sound likely as you are calling in from the current appdomain. You may want to change the connection string to specify a username and password. If nothing else it should help in the debugging.
Good luck!
A: Any luck with this?
I am using the ReportViewer control from Visual Studio 2008 in Local Mode with objects as the data source. My classes are mapped to data tables in my database. In the objects, it loads related objects as needed. So it leaves the reference null until you try to use the property, then it tries to load it from the database automatically. The classes use the System.Data.SqlClient namespace.
When I interact with the objects in my Windows Forms application, everything works as expected. But when I pass the object to be used as a Report Data Source and it tries to automatically load the related object, it fails. The code creates a SqlConnection object and when I call GetCommand() on it, the following exception is thrown:
[System.Security.SecurityException] {
"Request for the permission of type 'System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed."
} System.Security.SecurityException
I've tried searching for the error, but all the results that show up are for CLR assemblies running on a SQL Server or ASP.Net. I've tried adding the following call in my code (as suggested in the search results) before creating the SqlConnection objects, but it apparently didn't do anything:
new System.Data.SqlClient.SqlClientPermission(System.Security.Permissions.PermissionState.Unrestricted).Assert();
A: I've found the solution. You pass the System.Security.Policy.Evidence of your executing assembly (or one that has sufficient rights) to the LocalReport for use during execution.
reportViewer.LocalReport.ExecuteReportInCurrentAppDomain(System.Reflection.Assembly.GetExecutingAssembly().Evidence);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I create database tables from XSD files? I have a set of XSDs from which I generate data access classes, stored procedures and more.
What I don't have is a way to generate database table from these - is there a tool that will generate the DDL statements for me?
This is not the same as Create DB table from dataset table, as I do not have dataset tables, but XSDs.
A: I use XSLT to do that.
Write up your XSD, then pass your data models through a hand-written XSLT that outputs SQL commands. Writing an XSLT is way faster and more reusable than a custom program/script you may write.
At least that's how I do it at work, and thanks to that I got time to hang out on SO :)
A: The best way to create the SQL database schema from an XSD file is a program called Altova XMLSpy; the process is very simple:
*Create a new project
*Right-click the DTDs / Schemas folder and select Add Files
*Select the XSD file
*Open the added XSD file by double-clicking it
*On the toolbar, look for Conversion
*Select "Create structure database from XML schema"
*Select the data source
*Finally, export the result; the generated SQL script can then be executed against SQL Server
Hope it helps.
A: There is a command-line tool called XSD2DB that generates a database from XSD files, available at SourceForge.
A: XML Schemas describe hierarchical data models and may not map well to a relational data model. Mapping XSDs to database tables is very similar to mapping objects to database tables; in fact you could use a framework like Castor that does both. It allows you to take an XML schema and generate classes, database tables, and data access code. I suppose there are now many tools that do the same thing, but there will be a learning curve, and the default mappings will most likely not be what you want, so you have to spend time customizing whatever tool you use.
XSLT might be the fastest way to generate exactly the code that you want. If it is a small schema, hardcoding it might be faster than evaluating and learning a bunch of new technologies.
A: Commercial Product: Altova's XML Spy.
Note that there's no general solution to this. An XSD can easily describe something that does not map to a relational database.
While you can try to "automate" this, your XSD's must be designed with a relational database in mind, or it won't work out well.
If the XSD's have features that don't map well you'll have to (1) design a mapping of some kind and then (2) write your own application to translate the XSD's into DDL.
Been there, done that. Work for hire -- no open source available.
A: Create a Java Model using Axis wsdl2java (which can take in .xsd files).
Use a database generation tool for Java that takes in a Java Model. Surely something like Hibernate can do this? I wrote my own tool (takes a couple of days, also generates CRUD code in Java too) to save myself time at work, maybe this would be a nice personal project?
Or just do it manually so that you can check everything is correct and good! Database tools are good enough now that you can zip through creating tables for a model without too many problems.
A: You might take a look at the XSD tool in Visual Studio 2008... I have created a relational dataset from an XSD, and it might help you out somehow.
A: hyperjaxb (versions 2 and 3) actually generates hibernate mapping files and related entity objects and also does a round trip test for a given XSD and sample XML file. You can capture the log output and see the DDL statements for yourself.
I had to tweak them a little bit, but it gives you a basic blue print to start with.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: Saving "tree /f /a" results to a text file with Unicode support I'm trying to use the tree command in a Windows command line to generate a text file listing the contents of a directory, but when I pipe the output the Unicode characters get stuffed up.
Here is the command I am using:
tree /f /a > output.txt
The results in the console window are fine:
\---Erika szobája
cover.jpg
Erika szobája.m3u
Kátai Tamás - 01 Télvíz.ogg
Kátai Tamás - 02 Zölderdõ.ogg
Kátai Tamás - 03 Renoir kertje.ogg
Kátai Tamás - 04 Esõben szaladtál.ogg
Kátai Tamás - 05 Ázik az út.ogg
Kátai Tamás - 06 Sûrû völgyek takaród.ogg
Kátai Tamás - 07 Õszhozó.ogg
Kátai Tamás - 08 Mécsvilág.ogg
Kátai Tamás - 09 Zúzmara.ogg
But the text file is no good:
\---Erika szob ja
cover.jpg
Erika szob ja.m3u
K tai Tam s - 01 T‚lv¡z.ogg
K tai Tam s - 02 Z”lderdä.ogg
K tai Tam s - 03 Renoir kertje.ogg
K tai Tam s - 04 Esäben szaladt l.ogg
K tai Tam s - 05 µzik az £t.ogg
K tai Tam s - 06 S–r– v”lgyek takar¢d.ogg
K tai Tam s - 07 åszhoz¢.ogg
K tai Tam s - 08 M‚csvil g.ogg
K tai Tam s - 09 Z£zmara.ogg
How can I fix this? Ideally the text file would be exactly the same as the output in the console window.
I tried Chris Jester-Young's suggestion (what happened, did you delete it Chris?) of running the command line with the /U switch, it looked like exactly what I needed but it does not appear to work. I have tried opening the file in both VS2008 and notepad and both show the same incorrect characters.
A: Has anyone already tried this:
tree /f /a |clip
Open Notepad, press Ctrl+V, and save as output.txt; Notepad can save with Unicode support.
A: I decided I had to have a look at tree.com and figure out why it's not respecting the Unicode setting of the console. It turns out that (like many of the command-line file utilities), it uses a library called ulib.dll to do all the printing (specifically, TREE::DisplayName calls WriteString in ulib).
Now, in ulib, the WriteString method is implemented in two classes, SCREEN and STREAM. The SCREEN version uses WriteConsoleW directly, so all the Unicode characters get correctly displayed. The STREAM version converts the Unicode text to one of three different encodings (_UseConsoleConversions ⇒ console codepage (GetConsoleCP), _UseAnsiConversions ⇒ default ANSI codepage, otherwise ⇒ default OEM codepage), and then writes this out. I don't know how to change the conversion mode, and I don't believe the conversion can be disabled.
I've only looked at this briefly, so perhaps more adventurous souls can speak more about it! :-)
A: This will save the results as ASCII (American Standard Code for Information Interchange) on your desktop; ASCII/ANSI doesn't recognize every international or extended character:
tree /f > ascii.txt
This will convert your ASCII text to Unicode (/c must precede actual command):
cmd /u /c type ascii.txt > unicode.txt
So why not just think of the ascii file as a temporary file and delete it?
del ascii.txt
If you must put all in one line you could use:
tree /f > ascii.txt & cmd.exe /u /c type ascii.txt > unicode.txt & del ascii.txt
A: The short answer is that you cannot, because tree.com is an ANSI application, even on Windows 7.
The only solution is to write your own tree implementation. You could also file a bug with Microsoft, but I doubt they aren't already aware of it.
A: You can try
tree /A > output.txt
Though it looks different from the CMD line, it still could be acceptable. :P
A: This worked for me:
tree /f /a > %temp%\Listing >> files.txt
A: XP1's answer is great, but has a minor caveat: the output encoding is UCS-2 LE, while I'd prefer UTF-8 (smaller file size, and more widespread).
After a lot of searching and head scratching, I can finally present the following command, which produces a UTF-8 (with BOM) file:
PowerShell -Command "TREE /F | Out-File output.txt -Encoding utf8"
If the output filename has spaces:
PowerShell -Command "TREE /F | Out-File ""output file.txt"" -Encoding utf8"
Many thanks to this article: https://www.kongsli.net/2012/04/20/powershell-gotchas-redirect-to-file-encodes-in-unicode/
Also, personally I have created the following files in my PATH:
xtree.cmd:
@IF [%1]==[] @(
ECHO You have to specify an output file.
GOTO :EOF
)
@PowerShell -Command "TREE | Out-File %1 -Encoding utf8"
xtreef.cmd:
@IF [%1]==[] @(
ECHO You have to specify an output file.
GOTO :EOF
)
@PowerShell -Command "TREE /F | Out-File %1 -Encoding utf8"
Finally, instead of tree > output.txt I just do xtree output.txt
A: Use PowerShell:
powershell -command "tree /f > tree.txt"
Test case:
create.ps1:
mkdir "Erika szobája"
$null | Set-Content "Erika szobája/cover.jpg"
$null | Set-Content "Erika szobája/Erika szobája.m3u"
$null | Set-Content "Erika szobája/Kátai Tamás - 01 Télvíz.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 02 Zölderdõ.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 03 Renoir kertje.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 04 Esõben szaladtál.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 05 Ázik az út.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 06 Sûrû völgyek takaród.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 07 Õszhozó.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 08 Mécsvilág.ogg"
$null | Set-Content "Erika szobája/Kátai Tamás - 09 Zúzmara.ogg"
Output:
tree.txt:
Folder PATH listing
Volume serial number is 00000000 0000:0000
C:.
│ create.ps1
│ tree.txt
│
└───Erika szobája
cover.jpg
Erika szobája.m3u
Kátai Tamás - 01 Télvíz.ogg
Kátai Tamás - 02 Zölderdo.ogg
Kátai Tamás - 03 Renoir kertje.ogg
Kátai Tamás - 04 Esoben szaladtál.ogg
Kátai Tamás - 05 Azik az út.ogg
Kátai Tamás - 06 Sûrû völgyek takaród.ogg
Kátai Tamás - 07 Oszhozó.ogg
Kátai Tamás - 08 Mécsvilág.ogg
Kátai Tamás - 09 Zúzmara.ogg
EDIT:
Enhanced and improved version for power users
Test case:
$null | Set-Content "欲速则不达.txt"
$null | Set-Content "爱不是占有,是欣赏.txt"
$null | Set-Content "您先请是礼貌.txt"
$null | Set-Content "萝卜青菜,各有所爱.txt"
$null | Set-Content "广交友,无深交.txt"
$null | Set-Content "一见钟情.txt"
$null | Set-Content "山雨欲来风满楼.txt"
$null | Set-Content "悪妻は百年の不作。.txt"
$null | Set-Content "残り物には福がある。.txt"
$null | Set-Content "虎穴に入らずんば虎子を得ず。.txt"
$null | Set-Content "夏炉冬扇.txt"
$null | Set-Content "花鳥風月.txt"
$null | Set-Content "起死回生.txt"
$null | Set-Content "自業自得.txt"
$null | Set-Content "아는 길도 물어가라.txt"
$null | Set-Content "빈 수레가 요란하다.txt"
$null | Set-Content "방귀뀐 놈이 성낸다.txt"
$null | Set-Content "뜻이 있는 곳에 길이 있다.txt"
$null | Set-Content "콩 심은데 콩나고, 팥 심은데 팥난다.txt"
In his answer, @Chris Jester-Young wrote:
Now, in ulib, the WriteString method is implemented in two
classes, SCREEN and STREAM. The SCREEN version uses
WriteConsoleW directly, so all the Unicode characters get correctly
displayed. The STREAM version converts the Unicode text to one of
three different encodings (_UseConsoleConversions ⇒ console codepage
(GetConsoleCP), _UseAnsiConversions ⇒ default ANSI codepage,
otherwise ⇒ default OEM codepage), and then writes this out.
This means that we cannot rely on getting the characters from a stream. File redirections won't work. We have to rely on writing to the console to get the Unicode characters.
The workaround, or hack, is to write the tree to the console and then dump the buffer to a file.
I have written the scripts to add the tree context menu when you right click on directories in Explorer. Save the files in the same directory and then run Install list menu.bat as administrator to install.
Install list menu.bat
@echo on
regedit /s "List files.reg"
copy "List.ps1" "%SystemRoot%"
pause
List files.reg
Windows Registry Editor Version 5.00
; Directory.
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Shell\List]
"MUIVerb"="List"
"ExtendedSubCommandsKey"="Directory\\ContextMenus\\List"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\Files]
"MUIVerb"="Files"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\Files\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'files' -directory '%1'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,66,00,69,00,6c,00,65,00,73,00,27,00,20,00,2d,00,\
64,00,69,00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,31,\
00,27,00,00,00
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\FilesRecursively]
"MUIVerb"="Files recursively"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\FilesRecursively\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'filesRecursively' -directory '%1'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,66,00,69,00,6c,00,65,00,73,00,52,00,65,00,63,00,\
75,00,72,00,73,00,69,00,76,00,65,00,6c,00,79,00,27,00,20,00,2d,00,64,00,69,\
00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,31,00,27,00,\
00,00
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\Tree]
"MUIVerb"="Tree"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\ContextMenus\List\Shell\Tree\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'tree' -directory '%1'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,74,00,72,00,65,00,65,00,27,00,20,00,2d,00,64,00,\
69,00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,31,00,27,\
00,00,00
; Directory background.
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\Shell\List]
"MUIVerb"="List"
"ExtendedSubCommandsKey"="Directory\\Background\\ContextMenus\\List"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\Files]
"MUIVerb"="Files"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\Files\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'files' -directory '%V'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,66,00,69,00,6c,00,65,00,73,00,27,00,20,00,2d,00,\
64,00,69,00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,56,\
00,27,00,00,00
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\FilesRecursively]
"MUIVerb"="Files recursively"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\FilesRecursively\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'filesRecursively' -directory '%V'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,66,00,69,00,6c,00,65,00,73,00,52,00,65,00,63,00,\
75,00,72,00,73,00,69,00,76,00,65,00,6c,00,79,00,27,00,20,00,2d,00,64,00,69,\
00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,56,00,27,00,\
00,00
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\Tree]
"MUIVerb"="Tree"
[HKEY_LOCAL_MACHINE\Software\Classes\Directory\Background\ContextMenus\List\Shell\Tree\Command]
; powershell -executionPolicy bypass "%SystemRoot%\List.ps1" -type 'tree' -directory '%V'
@=hex(2):70,00,6f,00,77,00,65,00,72,00,73,00,68,00,65,00,6c,00,6c,00,20,00,2d,\
00,65,00,78,00,65,00,63,00,75,00,74,00,69,00,6f,00,6e,00,50,00,6f,00,6c,00,\
69,00,63,00,79,00,20,00,62,00,79,00,70,00,61,00,73,00,73,00,20,00,22,00,25,\
00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,\
4c,00,69,00,73,00,74,00,2e,00,70,00,73,00,31,00,22,00,20,00,2d,00,74,00,79,\
00,70,00,65,00,20,00,27,00,74,00,72,00,65,00,65,00,27,00,20,00,2d,00,64,00,\
69,00,72,00,65,00,63,00,74,00,6f,00,72,00,79,00,20,00,27,00,25,00,56,00,27,\
00,00,00
List.ps1
function sortNaturally {
[Regex]::replace($_, '\d+', {
$args[0].value.padLeft(20)
})
}
function writeList {
param(
[parameter(mandatory = $true)]
[string] $text = $null
)
$filePath = "$env:temp\List.txt"
$text > "$filePath"
notepad "$filePath" | out-null
del "$filePath"
}
function listFiles {
param(
[switch] $recurse = $false
)
get-childItem -name -recurse:$recurse -force | sort-object $function:sortNaturally | out-string
}
function listTree {
tree /f
}
function getBufferText {
$rawUi = $host.ui.rawUi
$width = [Math]::max([Math]::max($rawUi.bufferSize.width, $rawUi.windowSize.width) - 1, 0)
$height = [Math]::max($rawUi.cursorPosition.y - 1, 0)
$lines = new-object System.Text.StringBuilder
$characters = new-object System.Text.StringBuilder
for ($h = 0; $h -lt $height; $h += 1) {
$rectangle = new-object System.Management.Automation.Host.Rectangle 0, $h, $width, $h
$buffer = $rawUi.getBufferContents($rectangle)
for ($w = 0; $w -lt $width; $w += 1) {
$cell = $buffer[0, $w]
$character = $cell.character
$characters.append($character) | out-null
}
$lines.appendLine($characters.toString()) | out-null
$characters.length = 0
}
$lines.toString() -replace '[ \0]*\r?\n', "`r`n"
}
function main {
param(
[parameter(mandatory = $true)]
[string] $type = $null,
[parameter(mandatory = $true)]
[string] $directory = $null
)
$outputEncoding = [Text.UTF8Encoding]::UTF8
[Console]::outputEncoding = [Text.UTF8Encoding]::UTF8
$PSDefaultParameterValues['out-file:encoding'] = 'utf8'
set-location -literalPath "$directory"
$typeFunction = @{
'files' = { writeList -text $(listFiles) };
'filesRecursively' = { writeList -text $(listFiles -recurse) };
'tree' = {
listTree
writeList -text $(getBufferText)
}
}
&($typeFunction.get_item($type))
}
main @args
A: If you output as non-Unicode (which you apparently do), you have to view the text file you create using the same encoding the Console window uses. That's why it looks correct in the console. In some text editors, you can choose an encoding (or "code page") when you open a file. (How to output as Unicode I don't know. cmd /U doesn't do what the documentation says.)
The Console encoding depends on your Windows installation. For me, it's "Western European (DOS)" (or just "MS-DOS") in Microsoft Word.
A: I've managed to properly output non-ascii characters from tree command into a file via Take Command Console.
In TCC type "option" and on first tab select "Unicode output". Then simply run
tree /f /a > output.txt
A: I've succeeded in getting the output as it is in the console, with all non-ASCII characters intact, by outputting to the console (just tree) and then copying from it (system menu -> Edit -> Mark, selecting all, Enter). The console buffer size should be increased in advance, depending on the number of files/folders, in the console's properties (system menu -> Properties). Other ways didn't work. tree|clip, mentioned in an earlier post, converts non-ASCII characters to ASCII ones the same as tree>file.txt does.
A: I used this method to catalog nearly 100 SDRAM cards and USB flash drives and it worked fine.
From within DOS....
C:\doskey [enter] {to enable handy keyboard shortcuts}
C:\tree j:\ >> d:\MyCatalog.txt /a [enter] {j:= is my USB drive ; d:= is where I want catalog ; /a = see other postings on this page}
A: XP1's answer is great, or at least the best here—tree command just can't handle some symbols even with this treatment.
However, at least on my setup, this script will generate trees missing the final line. While I'm unsure why exactly this happens, it can be fixed by modifying function listTree to look like this:
function listTree {
tree /f
echo .
}
Where echo . prints a line for PowerShell to cannibalize freely. This sacrifice sates it and the tree is output in entirety.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: Initializing a static std::map in C++ What is the right way of initializing a static map? Do we need a static function that will initialize it?
A: This is similar to PierreBdR's answer, but without copying the map.
#include <map>
using namespace std;
bool create_map(map<int,int> &m)
{
m[1] = 2;
m[3] = 4;
m[5] = 6;
return true;
}
static map<int,int> m;
static bool _dummy = create_map (m);
A: Using C++11:
#include <map>
using namespace std;
map<int, char> m = {{1, 'a'}, {3, 'b'}, {5, 'c'}, {7, 'd'}};
Using Boost.Assign:
#include <map>
#include "boost/assign.hpp"
using namespace std;
using namespace boost::assign;
map<int, char> m = map_list_of (1, 'a') (3, 'b') (5, 'c') (7, 'd');
A: Here is another way that uses the iterator-range constructor. No functions are needed to initialize it. There is no 3rd party code (Boost), no static functions or objects, no tricks, just simple C++:
#include <map>
#include <string>
typedef std::map<std::string, int> MyMap;
const MyMap::value_type rawData[] = {
MyMap::value_type("hello", 42),
MyMap::value_type("world", 88),
};
const int numElems = sizeof rawData / sizeof rawData[0];
MyMap myMap(rawData, rawData + numElems);
Since I wrote this answer C++11 is out. You can now directly initialize STL containers using the new initializer list feature:
const MyMap myMap = { {"hello", 42}, {"world", 88} };
A: For example:
const std::map<LogLevel, const char*> g_log_levels_dsc =
{
{ LogLevel::Disabled, "[---]" },
{ LogLevel::Info, "[inf]" },
{ LogLevel::Warning, "[wrn]" },
{ LogLevel::Error, "[err]" },
{ LogLevel::Debug, "[dbg]" }
};
If map is a data member of a class, you can initialize it directly in header by the following way (since C++17):
// Example
template<>
class StringConverter<CacheMode> final
{
public:
static auto convert(CacheMode mode) -> const std::string&
{
// validate...
return s_modes.at(mode);
}
private:
static inline const std::map<CacheMode, std::string> s_modes =
{
{ CacheMode::All, "All" },
{ CacheMode::Selective, "Selective" },
{ CacheMode::None, "None" }
// etc
};
};
A: I would wrap the map inside a static object and put the map initialisation code in the constructor of this object; this way you are sure the map is created before the initialisation code is executed.
A: Just wanted to share a pure C++98 workaround:
#include <map>
std::map<std::string, std::string> aka;
struct akaInit
{
akaInit()
{
aka[ "George" ] = "John";
aka[ "Joe" ] = "Al";
aka[ "Phil" ] = "Sue";
aka[ "Smitty" ] = "Yando";
}
} AkaInit;
A: In addition to the good top answer of using
const std::map<int, int> m = {{1,1},{4,2},{9,3},{16,4},{32,9}}
there's an additional possibility by directly calling a lambda that can be useful in a few cases:
const std::map<int, int> m = []()->auto {
std::map<int, int> m;
m[1]=1;
m[4]=2;
m[9]=3;
m[16]=4;
m[32]=9;
return m;
}();
Clearly a simple initializer list is better when writing this from scratch with literal values, but it does open up additional possibilities:
const std::map<int, int> m = []()->auto {
std::map<int, int> m;
for(int i=1;i<5;++i) m[i*i]=i;
m[32]=9;
return m;
}();
(Obviously it should be a normal function if you want to re-use it; and this does require recent C++.)
A: You can try:
std::map <int, int> mymap =
{
std::pair <int, int> (1, 1),
std::pair <int, int> (2, 2),
std::pair <int, int> (3, 3)
};
A: Best way is to use a function:
#include <map>
using namespace std;
map<int,int> create_map()
{
map<int,int> m;
m[1] = 2;
m[3] = 4;
m[5] = 6;
return m;
}
map<int,int> m = create_map();
A: It's not a complicated issue to make something similar to boost. Here's a class with just three functions, including the constructor, to replicate what boost did (almost).
template <typename T, typename U>
class create_map
{
private:
std::map<T, U> m_map;
public:
create_map(const T& key, const U& val)
{
m_map[key] = val;
}
create_map<T, U>& operator()(const T& key, const U& val)
{
m_map[key] = val;
return *this;
}
operator std::map<T, U>()
{
return m_map;
}
};
Usage:
std::map<int, int> mymap = create_map<int, int>(1,2)(3,4)(5,6);
The above code works best for the initialization of global variables or static members of a class, where you have no idea when they will first be used but want to assure that the values are available.
If say, you've got to insert elements into an existing std::map... here's another class for you.
template <typename MapType>
class map_add_values {
private:
MapType mMap;
public:
typedef typename MapType::key_type KeyType;
typedef typename MapType::mapped_type MappedType;
map_add_values(const KeyType& key, const MappedType& val)
{
mMap[key] = val;
}
map_add_values& operator()(const KeyType& key, const MappedType& val) {
mMap[key] = val;
return *this;
}
void to (MapType& map) {
map.insert(mMap.begin(), mMap.end());
}
};
Usage:
typedef std::map<int, int> Int2IntMap;
Int2IntMap testMap;
map_add_values<Int2IntMap>(1,2)(3,4)(5,6).to(testMap);
See it in action with GCC 4.7.2 here: http://ideone.com/3uYJiH
############### EVERYTHING BELOW THIS IS OBSOLETE #################
EDIT: The map_add_values class below, which was the original solution I had suggested, would fail when it comes to GCC 4.5+. Please look at the code above for how to add values to existing map.
template<typename T, typename U>
class map_add_values
{
private:
std::map<T,U>& m_map;
public:
map_add_values(std::map<T, U>& _map):m_map(_map){}
map_add_values& operator()(const T& _key, const U& _val)
{
m_map[_key] = _val;
return *this;
}
};
Usage:
std::map<int, int> my_map;
// Later somewhere along the code
map_add_values<int,int>(my_map)(1,2)(3,4)(5,6);
NOTE: Previously I used an operator [] for adding the actual values. This is not possible, as commented by dalle.
##################### END OF OBSOLETE SECTION #####################
A: If you are stuck with C++98 and don't want to use boost, here there is the solution I use when I need to initialize a static map:
typedef std::pair< int, char > elemPair_t;
elemPair_t elemPairs[] =
{
elemPair_t( 1, 'a'),
elemPair_t( 3, 'b' ),
elemPair_t( 5, 'c' ),
elemPair_t( 7, 'd' )
};
const std::map< int, char > myMap( &elemPairs[ 0 ], &elemPairs[ sizeof( elemPairs ) / sizeof( elemPairs[ 0 ] ) ] );
A: You have some very good answers here, but to me it looks like a case of "when all you have is a hammer"...
The simplest answer to why there is no standard way to initialise a static map is that there is no good reason to ever use a static map...
A map is a structure designed for fast lookup of an unknown set of elements. If you know the elements beforehand, simply use a C array. Enter the values in a sorted manner, or run sort on them if you can't do this. You can then get log(n) performance by using the STL functions to look up entries, lower_bound/upper_bound. When I have tested this previously, they normally perform at least 4 times faster than a map.
The advantages are many fold...
- faster performance (*4, I've measured on many CPU types, it's always around 4)
- simpler debugging. It's just easier to see what's going on with a linear layout.
- Trivial implementations of copy operations, should that become necessary.
- It allocates no memory at run time, so will never throw an exception.
- It's a standard interface, and so is very easy to share across, DLL's, or languages, etc.
I could go on, but if you want more, why not look at Stroustrup's many blogs on the subject.
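As a hedged sketch of that alternative (C++98-compatible; the table must be kept sorted by key, since lookup relies on binary search):
#include <algorithm>
#include <cstddef>

struct Entry { int key; const char* value; };

// Sorted by key in the source; no heap allocation, no exceptions.
static const Entry kTable[] = {
    { 1, "one" }, { 3, "three" }, { 5, "five" }, { 7, "seven" }
};
static const std::size_t kTableSize = sizeof kTable / sizeof kTable[0];

static bool entryLess(const Entry& a, const Entry& b) { return a.key < b.key; }

const char* lookup(int key)
{
    Entry probe = { key, 0 };
    const Entry* it = std::lower_bound(kTable, kTable + kTableSize, probe, entryLess);
    if (it != kTable + kTableSize && it->key == key)
        return it->value;
    return 0; // not found
}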
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "516"
}
|
Q: Simple ColdFusion script works in IE but not Firefox? I have a very simple bit of script that changes the status of an item in a MySql database - it works fine in IE7, but if I try it in Firefox, it looks like it's worked, but hasn't... Which is extremely odd.
The code is very simple - first I get the details of the record I'm looking for:
<cfscript>
// Get the Product Attribute details
Arguments.qGetProductAttribute = Application.cfcProducts.getProductAttributes(Arguments.iProductAttributeID);
</cfscript>
This is working fine, if I dump the results, it's just the content of the record as expected. So then I use an if statement to change the 'active' field from one to zero or vice versa.
<!--- If Product Attribute is active, mark as inactive --->
<cfif Arguments.qGetProductAttribute.bActive EQ 0>
<cfquery name="qChangeStatus" datasource="#Request.sDSN#">
UPDATE tblProductAttributes
SET bActive = <cfqueryparam value="1" cfsqltype="CF_SQL_INTEGER" maxlength="1" />
WHERE iProductAttributeID = <cfqueryparam value="#Arguments.iProductAttributeID#" cfsqltype="CF_SQL_INTEGER" />;
</cfquery>
<!--- Else if Product Attribute is inactive, mark as active --->
<cfelseif Arguments.qGetProductAttribute.bActive EQ 1>
<cfquery name="qChangeStatus" datasource="#Request.sDSN#">
UPDATE tblProductAttributes
SET bActive = <cfqueryparam value="0" cfsqltype="CF_SQL_INTEGER" maxlength="1" />
WHERE iProductAttributeID = <cfqueryparam value="#Arguments.iProductAttributeID#" cfsqltype="CF_SQL_INTEGER" />;
</cfquery>
</cfif>
I can't see any reason whatsoever for this not to work... and indeed, in IE7 it works perfectly...
What happens is after this script is run, the browser goes back to the page that displays all of these records. For each record if the 'bActive' field is set to '1', it will display the word 'Active' and if it's set to 'zero', it will display 'Disabled'.
Simple enough.
If I run the script to disable a record, Firefox actually displays the word 'disabled' as expected, but the database record doesn't change!
I'm at a loss... how can server-side code work fine in one browser and not in another?!
A: Are you 100% certain that the database record does not change? You could get this effect if Firefox calls your script twice, once before the page is rendered and once after.
So the product gets set to disabled, then after the page is sent to the browser it is updated again (and as it is already disabled, it is re-enabled).
If you add a last-updated field to the database and update it every time the product is amended, then you would be able to tell if this is the case.
EDIT: responding to the comments below, a quick and dirty fix would be to check the last-updated timestamp first, and if it's within n seconds of the current time, dismiss the update.
Do you have any plug-ins in Firefox that may be re-calling the page, perhaps for dev purposes? An easy test to see if it's your script or a quirk in Firefox would be to change your GET url to a form with a post method, as the browser/plug-in shouldn't re-call a POST request.
A: I found the cause of the problem... Firebug.
I haven't the slightest idea what Firebug thinks it's doing, if I remove the 'cflocation' tag from the script (the one that takes the user back to the summary page), then it works fine. But if I keep it in, Firebug seems to run the function again before forwarding the browser to the summary page.
There's no reason for it to be doing this.
Un-bloody-believable.
At least it won't be happening on the clients' machines.
A: This may be a browser caching issue. There's no way that straight CF code could be affected by the browser being used. What happens if you refresh the page where you're displaying the products? You also need to look at the database directly to see if the value is changing or not.
On a bit of a tangent, you can eliminate the need for an if statement at all with a little simple math.
<cfquery name="qChangeStatus" datasource="#Request.sDSN#">
UPDATE tblProductAttributes
SET
bActive = <cfqueryparam value="#val(1 - Arguments.qGetProductAttribute.bActive)#" cfsqltype="CF_SQL_INTEGER" maxlength="1" />
WHERE
iProductAttributeID = <cfqueryparam value="#Arguments.iProductAttributeID#" cfsqltype="CF_SQL_INTEGER" />;
</cfquery>
A: The code you've posted isn't the cause of the error because it's all server side code - there's nothing happening on the client in there.
I'd turn on CF debugging (including database activity) and stick a <cfabort> tag right after the closing </cfif> tag, and before any redirection back to the product view page. Run the code and look at the SQL debug output.
My guess is that when using Firefox the block of code containing the queries just doesn't get called.
A: Try removing the semi-colon at the end of your WHERE clauses in your SQL code.
WHERE iProductAttributeID = <cfqueryparam value="#Arguments.iProductAttributeID#" cfsqltype="CF_SQL_INTEGER" />;
A: All those different answers, and none of them worked for me. I had to go to another forum where someone said it was the Skype extension add-on in Firefox which causes ColdFusion databases to go crazy or not function. I uninstalled the Skype extension (thank you, Skype) and everything was back to normal. Hope this works for someone else too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Sql Server 2005 efficiency savings? Are there good efficiency savings using Sql Server 2005 over Sql Server 2000?
Or does it just have more services, etc.?
Has anyone seen their system work any quicker after making the upgrade?
A: The surrounding tools such as Analysis Services were substantially rewritten and can get you a variety of wins depending on your requirements. However I don't see a lot of really fundamental changes from 2000 to 2005 in the core database engine.
There are some improvements that may get you better performance in certain situations. SQL2005 has much better support for 64-bit architectures and better table partitioning than SQL2000 (you can partition a table as opposed to making partitioned views). 64-bit support is the most likely to give you a performance win on a large system as it allows you to set up much larger caches.
Apart from those features I don't believe that there is really a large difference. There are probably minor performance tweaks.
The main reason to move from SQL2000 to SQL2005 will be when SQL2000 goes out of support. If you have a running application on SQL2000 there are not a lot of compelling reasons to switch to 2005 while 2000 is still supported by Microsoft.
Data Warehouse systems will get quite a few wins from moving to SQL2005. SSIS, SSAS2005 and SSRS2005 are much better than their SQL2000 counterparts.
A: 2005 provides MVCC - row level versioning essentially - so as a developer there are some efficiencies: less locking to worry about.
A: I haven't migrated a system from 2000 to 2005 - I've either started with one or the other - so I don't have a comparison of my own. But there is a reasonable chance you will see a perf difference; if not by taking advantage of some of the new features like snapshot isolation, then at least by virtue of the fact that SQL2005's licensing model allows you to go multi-core at no additional licensing cost, and by the fact that SQL2005 has improved memory management.
A: Things will absolutely run faster with 2005. There were several improvements made to the query optimizer. And now you can create covering indexes so that the included columns only exist at the leaf level and don't have to get sorted. That alone is an enormous improvement and reason enough to upgrade.
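For example, a covering index with included columns uses the INCLUDE clause introduced in SQL Server 2005 (the table and column names here are made up for illustration):
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount);
Queries that filter on CustomerId and select only OrderDate and TotalAmount can then be answered entirely from the index leaf level.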
A: SQL 2005 does a better job of working with caching. You used to have to poll SQL 2000 periodically to check for updates to a whole table. Now you can subscribe to a notification when something changes. It also works for queries, tables, and a few other elements.
A: I would say yes for all of the reasons listed by others, but even if your SQL skills are not that strong and your queries are not that great they will probably run faster on 2005. We moved from 2000 to 2005 and we had some complex queries that we could not get properly optimized in 2000. When we moved to 2005 it ate the queries up! Clearly the optimizer was making much better decisions out of the box.
I would strongly recommend moving to 2005 unless you have no issues with 2000.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to add custom item to system menu in C++? I need to enumerate all running applications. In particular, all top windows. And for every window I need to add my custom item to the system menu of that window.
How can I accomplish that in C++?
Update.
I would be more than happy to have a solution for Windows, MacOS, and Ubuntu (though, I'm not sure if MacOS and Ubuntu have such thing as 'system menu').
A: For Windows, another way to get the top-level windows (besides EnumWindows, which uses a callback) is to get the first child of the desktop and then retrieve all its siblings:
HWND wnd = GetWindow(GetDesktopWindow(), GW_CHILD);
while (wnd) {
// handle 'wnd' here
// ...
wnd = GetNextWindow(wnd, GW_HWNDNEXT);
}
As for getting the system menu, use the GetSystemMenu function, with FALSE as the second argument. The GetMenu mentioned in the other answers returns the normal window menu.
Note, however, that while adding a custom menu item to a foreign process's window is easy, responding to the selection of that item is a bit tricky. You'll either have to inject some code to the process in order to be able to subclass the window, or install a global hook (probably a WH_GETMESSAGE or WH_CBT type) to monitor WM_SYSCOMMAND messages.
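Putting those pieces together, a minimal sketch (the command ID and caption are made up for illustration; responding to WM_SYSCOMMAND still needs the hook or injection described above):
#include <windows.h>

#define IDM_MYITEM 0xE000  // arbitrary app-chosen ID, below the 0xF000 system range

BOOL CALLBACK AddItemProc(HWND hwnd, LPARAM)
{
    if (IsWindowVisible(hwnd)) {
        // FALSE = get the current system menu without resetting it to the default.
        HMENU hSysMenu = GetSystemMenu(hwnd, FALSE);
        if (hSysMenu) {
            AppendMenu(hSysMenu, MF_SEPARATOR, 0, NULL);
            AppendMenu(hSysMenu, MF_STRING, IDM_MYITEM, TEXT("My custom item"));
        }
    }
    return TRUE; // continue enumerating top-level windows
}

int main()
{
    EnumWindows(AddItemProc, 0);
    return 0;
}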
A: Once you have another window's top level handle, you may be able to call GetMenu() to retrieve the Window's system menu and then modify it, eg:
HMENU hMenu = GetMenu(hwndNext);
A: You can use EnumWindows() to enumerate top level Windows.
I don't have a specific answer for the second part of your question, but if you subclass the window, I imagine you can modify the system menu.
EDIT: or do what Chris said: call GetMenu()
A: Re: the update - please note that not even Microsoft Windows requires windows to have a system menu. GetMenu( ) may return 0. You'll need to intercept window creation as well, because each new top window presumably needs it too.
Also, what you propose is rather intrusive to other applications. How are you going to ensure they don't break when you modify their menus? And how are you going to ensure you suppress the messages? In particular, how will you ensure you intercept them before anyone else sees them? To quote Raymond Chen, imagine what happens if two programs would try that.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to create a GET request with parameters, using JSF and navigation-rules? Is there a way to create an html link using h:outputLink, other JSF tag or code to create a non faces request (HTTP GET) with request parameters?
For example I have the following navigation-rule
<navigation-rule>
<navigation-case>
<from-outcome>showMessage</from-outcome>
<to-view-id>/showMessage.jsf</to-view-id>
<redirect/>
</navigation-case>
</navigation-rule>
In my page I would like to output the following html code:
<a href="/showMessage.jsf?msg=23">click to see the message</a>
I could just write the html code in the page, but I want to use the navigation rule in order to have all the urls defined in a single configurable file.
A: Tried using PrettyFaces? It's an open-source JSF extension designed specifically to make bookmarkable JSF pages / JSF with GET requests possible.
PrettyFaces - SEO, Dynamic Parameters, Bookmarks and Navigation for JSF / JSF2
A: This is an interesting idea. I'd be curious to know how it pans out in practice.
Getting the navigation rules
Navigation is handled by the NavigationHandler. Getting hold of the NavigationHandler isn't difficult, but the API does not expose the rules it uses.
As I see it, you can:
*parse faces-config.xml on initialization and store the rules in the application context (easy)
*implement your own NavigationHandler that ignores the rules in faces-config.xml or supplements them with your own rules file and exposes its ruleset somehow (workable, but takes a bit of work)
*mock your own FacesContext and pass it to the existing navigation handler (really difficult to make two FacesContext objects coexist in the same thread, and extremely inefficient)
Now, you have another problem too. Where are you going to keep the mappings to look up the views? Hard-code them in the beans?
Using the navigation rules
Off hand, I can think of two ways you could construct parameter-containing URLs from the back-end. Both involve defining a bean of some kind.
<managed-bean>
<managed-bean-name>navBean</managed-bean-name>
<managed-bean-class>foo.NavBean</managed-bean-class>
<managed-bean-scope>application</managed-bean-scope>
</managed-bean>
Source:
package foo;
import java.io.IOException;
import java.io.Serializable;
import java.net.URLEncoder;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
public class NavBean implements Serializable {
private String getView() {
String viewId = "/showMessage.faces"; // or look this up somewhere
return viewId;
}
/**
* Regular link to page
*/
public String getUrlLink() {
FacesContext context = FacesContext.getCurrentInstance();
ExternalContext extContext = context.getExternalContext();
String viewId = getView();
String navUrl = context.getExternalContext().encodeActionURL(
extContext.getRequestContextPath() + viewId);
return navUrl;
}
/**
* Just some value
*/
public String getValue() {
return "" + System.currentTimeMillis();
}
/**
* Invoked by action
*/
public String invokeRedirect() {
FacesContext context = FacesContext.getCurrentInstance();
ExternalContext extContext = context.getExternalContext();
String viewId = getView();
try {
String charEncoding = extContext.getRequestCharacterEncoding();
String name = URLEncoder.encode("foo", charEncoding);
String value = URLEncoder.encode(getValue(), charEncoding);
viewId = extContext.getRequestContextPath() + viewId + '?' + name
+ "=" + value;
String urlLink = context.getExternalContext().encodeActionURL(
viewId);
extContext.redirect(urlLink);
} catch (IOException e) {
extContext.log(getClass().getName() + ".invokeRedirect", e);
}
return null;
}
}
GET
For a GET request, you can use the UIParameters to set the values and let the renderer build the parameter list.
<h:outputLink value="#{navBean.urlLink}">
<f:param name="foo" value="#{navBean.value}" />
<h:outputText value="get" />
</h:outputLink>
POST
If you want to set the URL to a view during a POST action, you can do it using a redirect in an action (invoked by a button or commandLink).
<h:commandLink id="myCommandLink" action="#{navBean.invokeRedirect}">
<h:outputText value="post" />
</h:commandLink>
Notes
Note that ExternalContext.encodeActionURL is used to encode the string. This is good practice for producing code that is portable across contexts (portlets, etcetera). You would use encodeResourceURL if you were encoding a link to an image or download file.
A: Have you considered a form?
<h:form>
<h:commandLink value="Click to see the message" action="#{handler.outcome}" />
<h:inputHidden id="msgId" value="#{bean.msgId}"/>
</h:form>
A: You could use a commandLink with nested param tags. This is basically the same as hubbardr said above:
<h:form>
<h:commandLink value="click here" action="${handler.outcome}">
<f:param name="msgId" value="${bean.id}" />
</h:commandLink>
</h:form>
Then in your backing bean you need to do:
Map requestMap = FacesContext.getCurrentInstance()
.getExternalContext().getRequestParameterMap();
String msgId = (String) requestMap.get("msgId");
And then do whatever you need to do.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: LabVIEW Objects I have a base class object array into which I have typecast many different child class objects, and I am passing it to a subVI. Is there any way I can find out the original type of each individual element in the array?
Thanks ...
A: For posterity, this was crossposted to the LAVA forums. The user Aristos Queue, one of the developers of LabVIEW's native OO features, answered with the following:
Using a dynamic dispatch method in every class is the recommended way of handling this, although the recommendation is to create a method that does whatever it is you're trying to do. I'm guessing that you're thinking of a dynamic dispatch method that returns a name or ID of the object so you can say, "Is it equal to this? Ok, then it must be this class..." and then you do Action X if it is that class. If you write a dynamic dispatch method ActionX.vi and then override it appropriately, you'll save yourself on performance and have much easier time for code maintenance in the future.
You can also use the To More Specific node to test if a given object can be downcast to a given type -- this allows for inheritance testing as opposed to the name or ID comparison that only does type equivalence. If the To More Specific node returns an error then it is not of the destination type.
So your options are (in order of preference):
*dynamic dispatch method that does the action
*To More Specific node to do type testing
*dynamic dispatch method that returns name/ID of the class of the object
*Get Path of LabVIEW Object.vi (shipped in vi.lib in LabVIEW 8.5 but not added to the palettes until LabVIEW 8.6)
A: NI has a good overview of LVOOP that is a must-read, since OO is implemented in a unique way for LabVIEW.
Have you tried the 'to more generic class' and 'to more specific class' functions, on the application control palette?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Hibernate global filtration criteria Is there a way to add a single criterion at the session factory (configuration) level which will apply to all entities retrieved with a Session?
We have a requirement to not delete any rows from our database but "mark" deleted entities as such so that they will not participate in any further operations.
I know that we can just retrieve all entities through a common interface (e.g. a common base Dao object), but the approach with global filtering would be less error-prone as it doesn't require the knowledge of using this common interface.
A: The Hibernate Documentation elaborates on this a little more. It looks like the best way to handle this is with Filters.
First, you define a filter on a class or collection like so:
<filter-def name="IsDeletedFilter">
<filter-param name="IsDeleted" type="boolean"/>
</filter-def>
Then you attach it to a class or collection:
<class/set ...>
...
<filter name="IsDeletedFilter" condition=":IsDeleted = Is_Deleted"/>
Alas, you must do this with every class you want soft-deleted. But you may be able to map out a base class SoftDeleteEntity and have a table per subclass hierarchy.
I have also seen some people claim to be able to do this with listeners, though I know not how.
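To make the filter actually take effect, you also have to enable it on each Session. A minimal sketch (IsDeletedFilter matches the mapping above; the Message entity is just a stand-in):
Session session = sessionFactory.openSession();
// activate the soft-delete filter for everything loaded through this session
session.enableFilter("IsDeletedFilter").setParameter("IsDeleted", false);
List messages = session.createQuery("from Message").list(); // soft-deleted rows excluded
Since filters are per-session, this still relies on every caller enabling them, which is one reason the common base DAO tends to creep back into the solution.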
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: can I expose a class from another .net namespace as a class in my namespace? Can I expose a class from another .net namespace as a class in my namespace? I use a class - antlr.collections.AST - as the return type for a function belonging to a class in my namespace; as a result, the user has to have
using antlr.collections;
using myNamespace;
at the top of their files in order to use my function. Can I make myNamespace.AST an alias for antlr.collections.AST, such that the user only has to have
using myNamespace;
at the top of their files?
A: Bear in mind that the consumers of your code won't actually need to have using statements. Those are there to make their lives easier, so they don't have to type antlr.collections.Foo and antlr.collections.Bar all over their source.
The bigger "impact" (if indeed there really is a severe one) is that the consumer of your code will need a hard reference to the assembly where antlr.collections is defined.
However, if that's documented up front, I honestly don't see it being that big of a problem. It's no different than the consumer of a SubSonic-generated DAL needing references both to the generated DAL assembly and the original SubSonic assembly. (And, quite possibly, using statements as well.)
Dependencies are what they are. There's a reason classes are broken into namespaces -- primarily for organization and to reduce naming conflicts. Not knowing what classes are in the namespace you mention, I don't know how likely such a conflict actually is in your scenario ... But attempting to move the class from one namespace to another, or to hide the fact that such is needed by deriving a blank class from it, is probably not the best idea. It won't kill the consumers of your class to have another reference and using statement.
A: How about deriving a class using the same name in the new namespace? I meant:
namespace MyForms {
class Class1 : Some.Other.Namespace.Class1 {
// ...
}
}
A: create a new class that inherits the class in your new namespace. It's not ideal, but it's useful for unit testing and the like.
You should think about why you are doing this though, classes are broken up into namespaces for a reason.
A: No, you can't.
The full path to and name of a class is part of its identity.
A: If you derive from the class and return your derived class, you'll make yourself responsible for providing all of the documentation for the return type.
I think you'll be doing the developers who use your library a disservice because they won't necessarily know that what they're really working with is a type from antir.collections (not that I even know what that is, but that's not the point). If the developer comes to StackOverflow.com searching for information on that return type, are they more likely to find information if the type is from a "common" library, or from yours?
A: The only solution is to hide the whole dependency to the type antlr.collections.AST.
You can use an Adapter for that purpose.
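A rough sketch of what such an adapter could look like (AstNode and the getText() call are illustrative; expose whichever antlr.collections.AST members you actually need):
namespace MyNamespace
{
    // wraps antlr.collections.AST so callers never see the antlr types
    public class AstNode
    {
        private readonly antlr.collections.AST inner;

        internal AstNode(antlr.collections.AST inner)
        {
            this.inner = inner;
        }

        public string Text
        {
            get { return inner.getText(); }
        }
    }
}
Your function then returns MyNamespace.AstNode; consumers no longer need the extra using directive, though the antlr DLL still has to be deployed alongside yours.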
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Feature branches in CVS? I'm duty-bound by policy to use CVS in this certain project, so even though I'd really like to switch to something else, like Git, I cannot.
So, my real question goes like this: We have a convention that we create a new branch in CVS every time we make a release (we also tag, but that is beside the point). We call these version-branches, and they allow us to easily check out a specific version and make hot-fix changes to it - this is what our minor-releases are.
But now I have some large-ish, risk-ridden changes coming up and if I was working in Git, I'd be creating a feature-branch in the blink of an eye. However, working in CVS, I tried creating feature branches in another project and found that things quickly turned out messy. I ended up with lots of branches and I lost track of which branches were synched, which needed merging and which were no longer in use.
So, inching closer to the question mark, is it feasible to use feature-branches in CVS? Are they too much trouble to be worth it or will I eventually end up being sorry for not using them? Should I bite the bullet and just start coding in HEAD but bend my coding process to introduce the changes in the most unobtrusive way possible?
A: If you are the only one developing on the feature-branch, you could simply use Git as your "sandbox development" system and then once you have the changes done, merge them into your CVS repository.
You still gain the benefit of source control for your intermediate work product.
A: There is an excellent discussion of branching strategies called streamed lines which might help - it describes the advantages and disadvantages of using feature branches.
It also covers options for code line ownership and policies that you might like to implement.
A: One thing to consider is to actually close the feature-branch when you are done with it, once you have merged it back with the main trunk. In this context, close simply means abandon the branch, not a real deletion.
Once the work is merged, there really is no need for the branch to "exist".
In order to quickly identify which branches are feature branches, I would suggest a naming convention like "FEAT_MY_FEATURE" or "FEAT_20080926" (start date?). This would make it easy to disregard all those feature branches when browsing the repository structure; a sketch of the commands involved follows below.
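For reference, a minimal sketch of the CVS commands involved (branch name per the convention above; run from an up-to-date checkout):
# create the feature branch and switch the working copy onto it
cvs tag -b FEAT_20080926
cvs update -r FEAT_20080926

# ...commit work on the branch as usual...

# later, from a trunk checkout, merge the branch in
cvs update -j FEAT_20080926

# if the branch lives on, tag each merge point so later re-merges
# (cvs update -j <last-merge-tag> -j FEAT_20080926) don't re-apply old changes
cvs tag FEAT_20080926_MERGE_1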
A: I have worked in an environment for several years where this was common practice and it was really painful. Make sure that the merges are part of your project plan because they are going to consume a lot of time and are sources of delay.
Documenting the branches and assigning them specific responsibilities helped a little.
We had to create a tool to help us merge changes incrementally (one change at a time, instead of merging the tip of branches) because CVS does not behave well if the two branches diverge.
Synchronize often (at least once a week).
I think the best way to approach this in retrospect would be to make sure that divergence remains minimal, and to split the risky development into different milestones, by using Scrum for example.
I also encourage you to read SCM Patterns. This book contains a lot of good advice.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do I redirect a page after successful validation with form_remote_for()? I'm using form_remote_for() to do validation on a form's data so that my Rails app can show the user errors without reloading the page. I'm having trouble redirecting the page if there are no errors. Here is my code:
def create
# Validate the data
if error_count > 0
render :update do |page|
page.replace_html(div_id, error_text)
end
else
# Save the data
redirect_to(page_to_go_to)
end
end
Everything in this method works fine except the redirect. The server claims it does a successful redirect and I believe it's the AJAX call that's being redirected. What I want to happen is have the page that contains the form to redirect. Is that possible?
A: I thought about this some more and realized that I needed the page to redirect instead of the request I was handling. So I tried the following and it works great.
def create
# Validate the data
if error_count > 0
render :update do |page|
page.replace_html(div_id, error_text)
end
else
# Save the data
render :update do |page|
page.redirect_to(page_to_go_to)
end
end
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I determine if a JavaScript variable is defined in a page? How can I check in JavaScript if a variable is defined in a page? Suppose I want to check if a variable named "x" is defined in a page; if I do if (x != null), it gives me an error.
A: To avoid accidental assignment, I make a habit of reversing the order of the conditional expression:
if ('undefined' !== typeof x) {
A: The typeof operator, unlike the other operators, doesn't throw a ReferenceError exception when used with an undeclared symbol, so it's safe to use...
if (typeof a != "undefined") {
a();
}
A: I got it to work using if (typeof(x) != "undefined")
A: You can do that with:
if (window.x !== undefined) {
// Your code here
}
A: As others have mentioned, the typeof operator can evaluate even an undeclared identifier without throwing an error.
alert (typeof sdgfsdgsd);
Will show "undefined," where something like
alert (sdgfsdgsd);
will throw a ReferenceError.
A: Assuming your function or variable is defined in the typical "global" (see: window's) scope, I much prefer:
if (window.a != null) {
a();
}
or even the following, if you're checking for a function's existence:
if (window.a) a();
A: Try comparing against undefined directly:
if (x !== undefined)
This is how checks for specific browser features are often done. Note, though, that unlike typeof this still throws a ReferenceError if x was never declared at all; it only works for declared-but-unassigned variables or for properties such as window.x.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118"
}
|
Q: How unique is the php session id How unique is the php session id? I got the impression from various things that I've read that I should not rely on two users never getting the same sessionid. Isn't it a GUID?
A: It's not very unique as shipped. In the default configuration it's the result of a hash of various things including the result of gettimeofday (which isn't terribly unique), but if you're worried, you should configure it to draw some entropy from /dev/urandom, like so
ini_set("session.entropy_file", "/dev/urandom");
ini_set("session.entropy_length", "512");
search for "php_session_create_id" in the code for the actual algorithm they're using.
Edited to add:
There's a DFA random-number generator seeded by the pid, mixed with the time in usecs. It's not a firm uniqueness guarantee, especially from a security perspective. Use the entropy config above.
Update:
As of PHP 5.4.0 session.entropy_file defaults to /dev/urandom or
/dev/arandom if it is available. In PHP 5.3.0 this directive is left
empty by default. PHP Manual
A: Size of session_id
Assume that the session_id is uniformly distributed and has size = 128 bits. Assume that every person on the planet logs in once a day with a new session, persistently, for 1000 years.
num_session_ids = 1000*365.25*7*10**9 < 2**52
collision_prob < 1 - (1-1/2**128)**(2**103) ≈ 1 - e**-(2**103/2**128)
              ≈ 1/2**25
(Here 2**103 approximates the number of pairs of session ids, n*(n-1)/2 with n < 2**52.) So the probability of one or more collisions is less than one in 30 million. Hence a 128-bit session_id should be big enough. As mentioned in other comments, the session manager might also check that a new session_id does not already exist.
Randomness
Therefore the big question, I think, is whether the session_ids are generated with good pseudo-randomness. On that you can never be sure, but I would recommend using a well-known and frequently used standard solution for this purpose (as you probably already do).
Even if collisions are avoided by checking, the randomness and size of the session_id are still important, so that attackers cannot somehow make qualified guesses and find active session_ids with significant probability.
A: A session_id can indeed be duplicated, but the probability is very low. If you have a website with fair traffic, it may happen once in your website's life, and will just annoy one user for one session.
This is not worth worrying about unless you expect to build a very high traffic website or a service for the banking industry.
A: I have not found confirmation of this, but I believe PHP checks whether a session id already exists before creating one with that id.
The session hijacking issue people are worried about is when someone finds out the session id of an active user. This can be prevented in many ways; for more info on that you can see this page on php.net and this paper on session fixation.
A: No, session id is not a GUID, but two users should not get the same session id as they are stored on the server side.
A: If you want to know how PHP generates a session ID by default, check out the source code on GitHub. It is certainly not purely random; it is based on a hash (default: MD5) of these ingredients (see lines 273-330 of the code snippet):
*
*A cryptographically secure pseudorandom number generator (CSPRNG)
If the OS has a random source available, then the strength of the generated ID, for the purpose of being a session ID, is high (/dev/urandom and other OS random sources are (usually) cryptographically secure PRNGs). If however it does not, it is merely satisfactory.
The goals of session ID generation are to:
*
*minimise the probability of generating two session IDs with the same value
*make it computationally very challenging to generate random keys and hit an in-use one.
This is achieved by PHP's approach to session generation.
You cannot absolutely guarantee uniqueness, but the probabilities are so low of hitting the same hash twice that it is, generally speaking, not worth worrying about.
A: You can install an alternative hash generation function if you want to customise the way the ID is generated (it's a 128-bit number generated via MD5 by default); a sketch follows below. See http://www.php.net/manual/en/session.configuration.php#ini.session.hash-function
For more information on PHP sessions, try this excellent article http://shiflett.org/articles/the-truth-about-sessions which also links to other articles about session fixation and hijack.
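For example, a sketch for pre-7.1 PHP, where these ini directives still exist (sha256 availability depends on the build's hash extension):
<?php
// switch the hash used for session IDs and widen the ID alphabet
ini_set('session.hash_function', 'sha256'); // or 0 = MD5, 1 = SHA-1
ini_set('session.hash_bits_per_character', 5); // characters 0-9, a-v
session_start();
?>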
A: You could opt to store the various sessions in the DB along with a DB-generated unique field; merge the two and save the result in a session variable, then check that one instead of the session id.
A: I know this post is very old. Yet I am adding my answer here, since I couldn't find a pertinent solution to this question even after posting a similar question myself. I however got a clue from a reply to my post. For those who are interested, the algorithm and solution are explained here. It uses a combination of the session and a different cookie.
The algorithm in brief is like this
Session handling is done with a custom class 'MySessionHandler' backed by the DB.
1. Just prior to session_start, a cookie cookie_start_time is set to the current time. The lifetime of this cookie is the same as that of the session; both use the variable $this->cookieLifeTime to set the lifetime.
2. In the session '_write' handler we set that value on the DB table field cookie_start_time, the same as $this->cookieStartTime.
3. In the session '_read' handler we do a check:
if ($getRowsOfSession[0]['cookie_start_time'] != $this->cookieStartTime)
If it returns true, that means this is a duplicate session, and the user is redirected to destroy the session and then redirected again to start a new session (2 redirections in total).
A: <?php
session_start();
$_SESSION['username']="username";
?>
<!DOCTYPE html>
<html>
<head>
<title>Update</title>
</head>
<body>
<table border="2">
<tr>
<th>Username</th>
<th>Email</th>
<th>Edit</th>
</tr>
<?php
$conn=mysqli_connect("localhost","root","","telephasic");
$q2="select * from register where username = '".$_SESSION['username']."'";
$run=mysqli_query($conn, $q2);
while($row=mysqli_fetch_array($run))
{
$name=$row[1];
$email=$row[2];
?>
<tr>
<td><?php echo $name; ?></td>
<td><?php echo $email; ?></td>
<td><a href="edit.php"> Edit </a></td>
</tr>
<?php } ?>
</table>
</body>
</html>
If your username is unique, you can use this code for the session.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
}
|
Q: Vim extension (via Python)? Is it possible to extend Vim functionality via a custom extension (preferably written in Python)?
What I ideally need is a custom command available in command mode, e.g.
ESC
:do_this
:do_that
A: Yes it is. There are several extensions on http://www.vim.org/scripts/index.php
It can be done with python as well if the support for python is compiled in.
Article about it: http://www.techrepublic.com/article/extending-vim-with-python/
Google is our friend.
HTH
A: I had problems compiling Vim with Python:
"checking if compile and link flags for Python are sane... no: PYTHON DISABLED" in the ./configure output.
On Ubuntu 10.04 you have to install python2.6-dev. The flags for ./configure are:
--enable-pythoninterp
--with-python-config-dir=/usr/lib/python2.6/config
Make sure the path points to the directory that contains the config.c file. Also, no / at the end of the path! That caused me problems.
A: vim supports scripting in python (and in perl as well, I think).
You just have to make sure that the vim distribution you are using has been compiled with python support.
If you are using a Linux system, you can download the source and then compile it with
./configure --enable-pythoninterp
make
sudo make install
Inside vim, you can type
:version
to list the available features; if it has python support, you should see a '+python' somewhere (a '-python' otherwise).
Then, to check the usage of the python module, you can type
:help python
P.S: if you're going to compile the vim sources, make sure to check the available configure options, you might need to specify --with-python-config-dir as well.
P.P.S: to create a "custom command in command mode" (if I understand correctly what you mean), you can create a function "MyFunction" in a vim script (using python or the vim scripting language) and then invoke it with
:call MyFunction()
Check
:help user-functions
for details
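To make this concrete, here is a minimal sketch (assuming a +python build; note that user-defined command names must start with an uppercase letter, so :do_this becomes :DoThis):
" put this in your vimrc or a plugin file
function! DoThis()
python << EOF
import vim
# uppercase the current line, just to show Python driving the buffer
vim.current.line = vim.current.line.upper()
EOF
endfunction
command! DoThis call DoThis()
After sourcing it, typing :DoThis in command mode runs the Python body.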
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is there a way to have managed processes in Perl (i.e. a threads replacement that actually works)? I have a multithreaded application in Perl for which I have to rely on several non-thread-safe modules, so I have been using fork()ed processes with kill() signals as a message-passing interface.
The problem is that the signal handlers are a bit erratic (to say the least) and often end up with processes that get killed in inappropriate states.
Is there a better way to do this?
A: Depending on exactly what your program needs to do, you might consider using POE, which is a Perl framework for multi-threaded applications with user-space threads. It's complex, but elegant and powerful and can help you avoid non-thread-safe modules by confining activity to a single Perl interpreter thread.
Helpful resources to get started:
*
*Programming POE presentation by Matt Sergeant (start here to understand what it is and does)
*POE project page (lots of cookbook examples)
Plus there are hundreds of pre-built POE components you can use to assemble into an application.
A: You can always have a pipe between parent and child to pass messages back and forth.
pipe my $reader, my $writer;
my $pid = fork();
if ( $pid == 0 ) {
close $reader;
...
}
else {
close $writer;
my $msg_from_child = <$reader>;
....
}
Not a very comfortable way of programming, but it shouldn't be 'erratic'.
A: Have a look at forks.pm, a "drop-in replacement for Perl threads using fork()" which makes for much more sensible memory usage (but don't use it on Win32). It will allow you to declare "shared" variables and then it automatically passes changes made to such variables between the processes (similar to how threads.pm does things).
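A minimal sketch of what that looks like (the lock() call matters, since increments from different processes would otherwise race):
# drop-in replacement for threads.pm, but each "thread" is a fork()ed process
use forks;
use forks::shared;

my $count :shared = 0;

my @workers = map {
    threads->create(sub { lock($count); $count++; })
} 1 .. 4;

$_->join for @workers;
print "count = $count\n";    # prints: count = 4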
A: From perl 5.8 onwards you should be looking at the core threads module. Have a look at http://metacpan.org/pod/threads
If you want to use modules which aren't thread safe you can usually load them with a require and import inside the thread entry point.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Any way to automatically wrap comments at column 80 in Visual Studio 2008? ..or display where column 80 is? Is there any way to automatically wrap comments at the 80-column boundary as you type them? ..or failing that, any way to display a faint line at the column 80 boundary to make wrapping them manually a little easier?
Several other IDEs I use have one or other of those functions and it makes writing comments that wrap in sensible places much easier/quicker.
[Edit] If (like me) you're using Visual C++ Express, you need to change the VisualStudio part of the key into VCExpress - had me confused for a while there!
A: This is provided as a sample macro:
Macros.Samples.VSEditor.FillCommentParagraph
The first time you run it, it'll ask you what fill width you want (e.g. 80). I bind this to Alt-Q since I'm an Emacs refugee. After that you just move into the comment you want to format, run the command, and it'll wrap your comments suitably.
It ain't perfect, but it's pretty good.
A: For Visual C 2008 Express users (like me) you'll need:
HKEY_CURRENT_USER\Software\Microsoft\VCExpress\9.0\Text Editor
Add a string value called Guides with the following value (as per the other responses):
RGB(180,180,255) 80
A: See this blog post from Sara Ford: http://blogs.msdn.com/saraford/archive/2004/11/15/257953.aspx
A: In order to make Visual Studio text editor show a faint line on the 80th column you open RegEdit and locate the following:
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor
Please notice that 9.0 is for Visual Studio 2008. You should put 8.0 if you have Visual Studio 2005.
You create a new String value named Guides and enter the following value:
RGB(128,0,0) 80
You can adjust the RGB color to the value you like. The number 80 is the column you want the line to appear at. You can add another line (although I don't see how this can help) like this:
RGB(128,0,0) 2,80
This will make two lines appear, one at the 2nd column and one at the 80th column.
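If you'd rather script the change, the same setting can be applied as a .reg file (adjust the version number and the VisualStudio/VCExpress part for your edition, as noted elsewhere in this question):
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\Text Editor]
"Guides"="RGB(128,0,0) 80"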
A: Take a look at the question here: Hidden Features of Visual Studio (2005-2010)?
It shows how to do that:
"Under "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor" Create a String called "Guides" with the value "RGB(255,0,0) 79" to have a red line at column 80 in the text editor."
A: HKCU\Software\Microsoft\VisualStudio\9.0\Text Editor\Guides = [REG_SZ] "RGB(192,192,192) 80"
(Looking at my 8.0 registry, so I'm not 100% certain)
A: By the way, in addition to the rightmost guide as per comments above, I also set lower contrast guides for columns 4, 8, 12, 16 etc. This really helps with code readability.
A: SlickEdit Tools for Visual Studio. There is a very good real time comment wrapper that automatically adjust length of lines as you type.
http://www.slickedit.com/products/slickedit-tools
A: Take a look at http://www.kynosarges.de/CommentReflower.html.
Comment Reflower for Visual Studio
Comment Reflower is an essential add-in for Microsoft Visual Studio that provides configurable automatic reformatting of block comments, including XML comments.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Security implications of disabling the Common Name check for HTTPS I'm going over some client code I've inherited for doing secure communication over HTTPS, and it seems that it's not checking the common name in the server certificate (eg. CN = "example.com") against the actual URL that's being requested. This is probably deliberate, since our client app is required to talk to various environments, so after contacting an initial portal (eg. example.com/main) and the user choosing an environment the app gets redirected to a specific IP, so all future requests look something like "https://127.0.0.1/page".
However being an SSL newbie, I'm unsure of the implications of disabling this check. My first reaction would be that it'd be easier to perform some kind of man-in-the-middle attack, since someone else could just copy our certificate and pretend to be one of our servers. But if we were doing common name checking you'd be able to do the same thing with custom DNS settings anyway, so it doesn't seem to actually gain us anything. Are there other attacks that this leaves us open to which we wouldn't be otherwise?
Thanks
A: If you control the client code, then you can restrict the trusted CAs to just your own. Then the domain check is less important - any of your servers can pretend to be another one.
If you don't control the client code, then a cert signed by a trusted CA can be substituted for yours.
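For illustration: the question doesn't say what the client is written in, but in a .NET client the check is typically controlled by a validation callback, so a hedged sketch of the difference looks like this:
// safe: accept only certificates that pass full validation, name check included
ServicePointManager.ServerCertificateValidationCallback =
    (sender, cert, chain, errors) => errors == SslPolicyErrors.None;

// risky: tolerating a name mismatch means any certificate you trust as
// genuine can impersonate any of your servers
ServicePointManager.ServerCertificateValidationCallback =
    (sender, cert, chain, errors) =>
        errors == SslPolicyErrors.None ||
        errors == SslPolicyErrors.RemoteCertificateNameMismatch;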
A: $0.02: using CN for host names is deprecated, X.509 Subject Alternative Names should be used instead.
A: *
*Verifying the certificate itself and that it can be chained to a CA certificate you already trust allows you to check that the certificate is genuine and valid.
*Checking the host name in the certificate allows you to check you're talking with the server you intended to talk to, provided you've verified the certificate to be valid indeed.
*(Checking that the remote party is indeed the one holding the private key for that certificate is done within the SSL/TLS handshake.)
If you want an analogy with passport/ID checking for people:
*
*Verifying the certificate is like checking that a passport or a form of ID is genuine. You can decide which forms of ID you want to accept from a person (e.g. passport, driving licence, staff card, ...) and which issuer countries you trust to be able to verify their authenticity.
*Checking that the remote party is the one holding the private key is similar to checking that the picture on the passport/ID matches the face of the person in front of you.
*Checking the host name is like checking the passport belongs to the person whose name is the one you're looking for.
If you don't check the host name, anyone with a valid passport that you consider genuine could come to you and claim they're the one you're looking for (by name).
In a very limited set of circumstances (for instance, where you only trust one specific CA or self-signed cert, and you can accept any certificate in that trusted set impersonating any other) it can be acceptable to ignore this verification, but this is very rare, and not good practice.
Checking that the name in the passport matches the name of the person you're looking for would be considered common sense; do it for certificates too. Not doing so allows anyone who has a certificate that you trust as genuine to impersonate any other certificate you would trust, thereby potentially performing MITM attacks.
The HTTPS host name verification rules are defined in RFC 2818 Section 3.1 (also more recently in a "best practices" spec, RFC 6125, not much implemented yet).
In short, the host name should be in a Subject Alternative Name DNS entry (although you can fall back on the CN of the Subject DN where there's no SAN in the certificate). When you're using an IP address, the IP address must be in a SAN IP-address entry (although some browsers will let you get away with the IP address in the CN of the Subject DN).
A: Someone else can't just copy your certificate and use it because they don't have your private key.
If you don't check that the certificate's CN matches the domain name, then they can simply create their own certificate (and have it signed by a trusted CA so it looks valid), use it in place of yours, and perform a man-in-the-middle attack.
Also, you need to be checking that the certificate comes from a trusted CA. It's the CA's job to make sure that you can only get a certificate with a given CN if you actually control that domain.
If you skip either of these checks then you are at risk of a MITM attack.
See also this answer for a different approach that will work if you have sufficient control over the client.
A: To do the same thing with "custom DNS settings" the attacker would have to exploit a DNS server (yours or a client's) to point example.com to an IP he controls, as opposed to just copying the certificate. If possible I'd create all the specific apps as subdomains of example.com and use a wildcard certificate (*.example.com) to be able to validate the CN.
A: Hostname verification (checking the CN part) guarantees that the other end of the connection (the server) holds an SSL certificate issued for the domain name you typed in the address bar. Typically an attacker will not be able to get such a certificate.
If you don't verify the hostname part, somebody (sitting at any of the routers or proxies the request passes through) could mount a man-in-the-middle attack, or could exploit DNS attacks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Debugging with Oracle's utl_smtp A client of mine uses Oracle 9i's utl_smtp to send out mail notifications to managers when their employees have made travel requests, and they would like quite a few changes made to the mailouts.
We're having a lot of problems getting utl_smtp to talk to any SMTP server on our network. We've even tried installing a free SMTP server on the Oracle box, but utl_smtp will not spot the mail server running on port 25. The error code is ORA-29278.
So two questions really.
*
*Does anyone have any experience setting up email using Oracle's utl_smtp utility, and any suggestions as to where we might be going wrong?
*Does anyone know if it is possible to get utl_smtp to dump text emails to a directory, much as you can with System.Net.Mail's SpecifiedPickupDirectory config setting? This would be by far the preferable option.
Thanks, Dan
A: Looks like the HELO is the problem. Please can we check with a simple testcase...
set serveroutput on
declare
lConnection UTL_SMTP.CONNECTION;
begin
lConnection := UTL_SMTP.OPEN_CONNECTION(your_smtp_server);
DBMS_OUTPUT.PUT_LINE('Opened ok');
UTL_SMTP.HELO(lConnection, your_client_machine_name);
DBMS_OUTPUT.PUT_LINE('HELO ok');
UTL_SMTP.MAIL(lConnection, your_email_address);
UTL_SMTP.RCPT(lConnection, your_email_address);
DBMS_OUTPUT.PUT_LINE('Addressing ok');
end;
/
A: Looks like we've resolved this.
To answer the two questions.
*
*Double check that the schema calling utl_smtp has execute permissions on sys.utl_smtp, sys.utl_tcp and sys.dbms_lob (the grants are sketched below). Also check that at no time the message being sent is > 32Kb.
*No there is no way to get utl_smtp to dump emails to a directory a la system.net.mail.
Thanks to cagcowboy for the help.
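For reference, the grants look something like this (run as a DBA; the TRAVELADMIN_DEV schema name is taken from the procedure below and is obviously site-specific):
GRANT EXECUTE ON sys.utl_smtp TO traveladmin_dev;
GRANT EXECUTE ON sys.utl_tcp TO traveladmin_dev;
GRANT EXECUTE ON sys.dbms_lob TO traveladmin_dev;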
A: Yes, we can telnet to the server.
-- ****** Object: Stored Procedure TRAVELADMIN_DEV.HTML_EMAIL Script Date: 22/08/2008 12:41:02 ******
CREATE PROCEDURE "HTML_EMAIL" (
p_to in varchar2,
p_cc in varchar2,
p_from in varchar2,
p_subject in varchar2,
p_text in varchar2 default null,
p_html in varchar2 default null
)
is
l_boundary varchar2(255) default 'a1b2c3d4e3f2g1';
l_connection utl_smtp.connection;
l_body_html clob := empty_clob; --This LOB will be the email message
l_offset number;
l_ammount number;
l_temp varchar2(32767) default null;
p_smtp_hostname varchar2(30):= 'rockies';
p_smtp_portnum varchar2(2) := '25';
begin
l_connection := utl_smtp.open_connection( p_smtp_hostname, p_smtp_portnum );
utl_smtp.helo( l_connection, p_smtp_hostname );
utl_smtp.mail( l_connection, p_from );
utl_smtp.rcpt( l_connection, p_to );
l_temp := l_temp || 'MIME-Version: 1.0' || chr(13) || chr(10);
l_temp := l_temp || 'To: ' || p_to || chr(13) || chr(10);
IF p_cc IS NOT NULL THEN -- (p_cc <> NULL) can never be true in Oracle; NULL tests need IS NOT NULL
l_temp := l_temp || 'Cc: ' || p_cc || chr(13) || chr(10);
utl_smtp.rcpt( l_connection, p_cc );
END IF;
l_temp := l_temp || 'From: ' || p_from || chr(13) || chr(10);
l_temp := l_temp || 'Subject: ' || p_subject || chr(13) || chr(10);
l_temp := l_temp || 'Reply-To: ' || p_from || chr(13) || chr(10);
l_temp := l_temp || 'Content-Type: multipart/alternative; boundary=' ||
chr(34) || l_boundary || chr(34) || chr(13) ||
chr(10);
----------------------------------------------------
-- Write the headers
dbms_lob.createtemporary( l_body_html, false, 10 );
dbms_lob.write(l_body_html,length(l_temp),1,l_temp);
----------------------------------------------------
-- Write the text boundary
l_offset := dbms_lob.getlength(l_body_html) + 1;
l_temp := '--' || l_boundary || chr(13)||chr(10);
l_temp := l_temp || 'content-type: text/plain; charset=us-ascii' ||
chr(13) || chr(10) || chr(13) || chr(10);
dbms_lob.write(l_body_html,length(l_temp),l_offset,l_temp);
----------------------------------------------------
-- Write the plain text portion of the email
l_offset := dbms_lob.getlength(l_body_html) + 1;
dbms_lob.write(l_body_html,length(p_text),l_offset,p_text);
----------------------------------------------------
-- Write the HTML boundary
l_temp := chr(13)||chr(10)||chr(13)||chr(10)||'--' || l_boundary ||
chr(13) || chr(10);
l_temp := l_temp || 'content-type: text/html;' ||
chr(13) || chr(10) || chr(13) || chr(10);
l_offset := dbms_lob.getlength(l_body_html) + 1;
dbms_lob.write(l_body_html,length(l_temp),l_offset,l_temp);
----------------------------------------------------
-- Write the HTML portion of the message
l_offset := dbms_lob.getlength(l_body_html) + 1;
dbms_lob.write(l_body_html,length(p_html),l_offset,p_html);
----------------------------------------------------
-- Write the final html boundary
l_temp := chr(13) || chr(10) || '--' || l_boundary || '--' || chr(13);
l_offset := dbms_lob.getlength(l_body_html) + 1;
dbms_lob.write(l_body_html,length(l_temp),l_offset,l_temp);
----------------------------------------------------
-- Send the email in 1900 byte chunks to UTL_SMTP
l_offset := 1;
l_ammount := 1900;
utl_smtp.open_data(l_connection);
while l_offset < dbms_lob.getlength(l_body_html) loop
utl_smtp.write_data(l_connection,
dbms_lob.substr(l_body_html,l_ammount,l_offset));
l_offset := l_offset + l_ammount ;
l_ammount := least(1900, dbms_lob.getlength(l_body_html) - l_offset + 1); -- bytes remaining from l_offset
end loop;
utl_smtp.close_data(l_connection);
utl_smtp.quit( l_connection );
dbms_lob.freetemporary(l_body_html);
end;
A: *
*The OPEN_CONNECTION parameter should be the FQDN or IP address of the server you're connecting to.
*The HELO parameter should be the FQDN of the machine you're connecting from.
If this doesn't work, do you know which line it errors on?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Installer gives 2732 error :Directory Manager not initialized I have an MSI installer which was working fine. I added an external merge module. There were some directory merge errors during compilation, so I removed the directories causing the error from the directory table of the merge module.
I am getting the error:
MSI error 2732: Directory Manager not initialized.
Please help in solving the issue.
A: The Windows Installer Error Messages documentation for error 2732 says:
"The directory manager is responsible for determining the target and source paths. It is initialized during the costing actions (CostInitialize action, FileCost action, and CostFinalize action). A standard action or custom action made a call to a function requiring the directory manager before the initialization of the directory manager. This action should be sequenced after the costing actions."
A: One possibility is that you may not have put a backslash (\) after a directory path in the set directory action,
or
the custom action may need to be sequenced after CostInitialize.
A: As Mike Dimmick said,
get the sequence number of CostInitialize from the "InstallUISequence" table.
Now go to the InstallExecuteSequence table, find your custom action, and update its sequence value so it comes after the CostInitialize value.
It worked fine for me. You have to check for your case.
A: Another possible place to look at is the installation log.
Try installing the package using logging:
msiexec /i <package.msi> /l*v <logfile>
Inspect the log looking for the line containing "Return value 3." The failed custom action will show right above it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Web Applications & Desktop Applications I am a programmer who writes a lot of code for desktop applications and has now started considering cross-platform apps as an issue. At work I write C# apps, I come from a C++ and CS background, and of course I have written several things in Qt/C++. But now I am confused about web applications. I have done some work in PHP and I know how things go there; I was a Gmail and Google Docs user for a long time, and I have seen how much web applications have improved with new Web 2.0 technology, including Ajax, XML and so on. My confusion is this: should I start looking toward web application development and continue exploring the power of Web 2.0, or should I stick with my old world, where I feel very comfortable with parallelism and other stuff? Because, believe me, I have had many offers to work as a web application developer but never took the opportunity, and now I am unsure whether I should start writing web apps. Have you been writing desktop applications and switched to web development? Does anybody have experience with this scenario?
Thank you.
A: It's all about what kind of programs you want to be writing. End-user apps have already started a significant move to being web-oriented, because of the advantages that some companies find in outsourcing their data handling and IT infrastructure. Because this area of development is a new and growing sector, I have no doubt that you will be getting all kinds of offers, and hearing all about new startups and so forth that are centered on developing these kinds of applications.
That doesn't mean that desktop apps are going to go away. Some companies, and lots of private individuals, like to have a sense of being in physical possession of their data, and see no monetary benefit in "renting" an online app or in outsourcing their data handling. These people are going to keep the desktop app market open in the foreseeable future, although perhaps not to the extent that we have seen previously.
So at this point, you needn't feel forced to make a move into the web game, but there are certainly opportunities there if you want them.
A: The boundaries between desktop and web applications have really blurred. Whilst once upon a time the nature of developing for the web was totally different to developing for the desktop, nowadays you find the same concepts (such as parallelism which you referred to) cropping up in both. Don't think of developing web applications as taking a huge step away from traditional software development as you'll employ just as many skills and concepts as you already use. You wouldn't need to learn a whole lot more to get involved in web development if you have C# experience, as you could code backends to web applications in a very similar way to how you currently work. If you wanted/needed to get involved in the UI side of things, there are new technologies you'd need to pick up, but they're not essential to get a job in web development (as long as you weren't looking for a frontend role obviously).
To follow up Dustman's comments about companies wanting to keep tight control of their data etc; bear in mind that not all "web applications" involve the use of the internet. Really all the term means is "applications developed on web-based technologies" and as well as being deployed publically on the web, they're commonly deployed on intranets and other closed-access environments. I work for a software company which develops "web applications" but a large number of systems are hosted by clients for use on their internal networks for the very reasons Dustman refers to - they want to keep tight control of their data. The beauty of web based technologies is that you can achieve this whilst still reaping the benefits of a centralised system, meaning there is no need to manage deployment across 100s of workstations, no need to worry too much about the specifications of client devices, the ability to access the system across different types of device (mobile etc), regular and easily deployed updates, and so I could continue.
A: In the near future, the boundary between web development and desktop application development will keep blurring. For a professional programmer, learning new things is the real growth, and learning web development is not a difficult task for an experienced programmer. You can obviously go ahead and learn web development. You should get to know the web well, as it will certainly come to meet desktop apps in the near future.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I "smoothly" format HttpHandler URI? I'm just meddling in the ways of the RESTful web service in C# using ASP.Net 2.0 and have managed (via a class library, a reference to dll produced by the former and some adjustment of my web.config) to coax out a URI format like so:
http: //localhost/DevelopmentProject/testhandler/?input=thisismyinput
Which unremarkably just returns the input as a piece of text with the enlightening prefix "Your Input Was: "
I was under the impression that I could get the URI to become further ensmoothened to something more along the lines of:
http: //localhost/DevelopmentProject/testhandler/thisismyinput
and have the same result but have no idea how to get rid of the pesky "?input="
The entry to the httphandlers section of my web.config is (spaces added so code displays):
< add verb="*" path="testhandler/*" type="HandlerLib.testhandler, HandlerLib"/ >
I am running IIS 5.1 on the local machine, will this introduce a problem?
Essentially where am I going wrong?
Thanks.
A: One solution is to use UrlRewriting to rewrite the Url to what you need.
I use http://urlrewriter.net/ to do all my rewriting, and you could set up something like this in your scenario:
<rewriter>
<rewrite
url="DevelopmentProject/testhandler/([\w]+)"
to="DevelopmentProject/testhandler/?input=$1" />
</rewriter>
This would remain "http: //localhost/DevelopmentProject/testhandler/thisismyinput" in your browser address bar, yet process as "http: //localhost/DevelopmentProject/testhandler/?input=thisismyinput"
A: You could implement URL rewriting, using something like URLRewriter.net
That would let you use the syntax you've mentioned.
A: I kinda cheated.
Try:
My Article About How I Got Round It
A: Change your config from:
< add verb="*" path="testhandler/" type="HandlerLib.testhandler, HandlerLib"/ >
to:
< add verb="*" path="testhandler/*" type="HandlerLib.testhandler, HandlerLib"/ >
A: Check out the value of Request.PathInfo in your handler's ProcessRequest function
with a URL like http://localhost/DevelopmentProject/testhandler/thisismyinput.
If that doesn't do it, make sure that IIS 5.1 is routing ALL requests to the aspnet_isapi.dll. (Although, it seems like it already is) This is the "Configuration..." button > "App Mappings" tab in your virtual directory in IIS.
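For what it's worth, a sketch of the handler side (names are illustrative; depending on how IIS maps the request, the trailing segment may arrive in Request.PathInfo or only in Request.Path, so this checks both):
using System.Web;

public class TestHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. /DevelopmentProject/testhandler/thisismyinput
        string extra = context.Request.PathInfo;
        if (string.IsNullOrEmpty(extra))
        {
            string path = context.Request.Path;
            extra = path.Substring(path.LastIndexOf('/') + 1);
        }
        context.Response.ContentType = "text/plain";
        context.Response.Write("Your Input Was: " + extra.TrimStart('/'));
    }
}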
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/138771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|